Tatpong Katanyukul.
Academic Background
- B.Eng. (Electronics Engineering), King Mongkut's Institute of Technology Ladkrabang
- M.Eng. (Computer Science), Asian Institute of Technology
- Ph.D. (Mechanical Engineering), Colorado State University
Academic Areas of Interest
- Approximate Dynamic Programming, including Reinforcement Learning.
- Machine Learning Applications.
Contact me: tatpong at kku dot ac dot th
Warren B. Powell: “The most important dimension of ADP is ‘learning how to learn’, and as a result the process of getting approximate dynamic programming to work can be a rewarding educational experience.”
If you are a student looking for a research topic, these may interest you:
- Reinforcement Learning with Adaptive Learning Rate.
There are many adaptive learning-rate schemes out there. Most of them were developed for static settings, like supervised learning. Are they still good for dynamic systems? Or can we have something better? Look back at our own learning process: do we have a learning rate? What is its value? Is it fixed? Is it adaptive? How is it adjusted? Have you ever noticed when your brain is really running? And when it's ... mm ... mm ... what's the word? ... stalled? So our biological learning rate is not fixed. What changes it? How is it changed? Is that a good thing? What can we borrow for our learning agent? “Fear is a friend who's misunderstood” is a catchy line in the song Heart of Life, by John Mayer. Does fear actually do any good? Does it help speed up our thinking, or does it slow it down, as when you get what my brother calls “brain block”? What role does feeling play in our learning process? How can we use a similar idea to adapt the learning rate of a learning agent? If interested, check out Chapter 11 (adaptive step sizes) of Warren B. Powell's Approximate Dynamic Programming: Solving the Curses of Dimensionality, John Wiley & Sons, 2011. A rough sketch of what an adaptive step size can look like in code follows below.
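To make the idea concrete, here is a minimal sketch (my own toy illustration, not Powell's method): tabular Q-learning on a made-up five-state chain where each state-action pair keeps its own visit count and uses a generalized harmonic step size, one simple adaptive rule of the kind discussed in that chapter, instead of a fixed learning rate. The environment, the constant A, and the exploration rate are arbitrary choices for illustration.

import random
from collections import defaultdict

# Toy chain MDP: states 0..4, action 0 = left, 1 = right.
# Reaching state 4 gives reward 1 and ends the episode.
N_STATES, GOAL, GAMMA = 5, 4, 0.95

def step(s, a):
    s_next = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
    done = (s_next == GOAL)
    return s_next, (1.0 if done else 0.0), done

Q = defaultdict(float)          # Q[(state, action)]
visits = defaultdict(int)       # visit count per (state, action)
A = 20.0                        # harmonic step-size constant (illustrative)

def alpha(s, a):
    # Generalized harmonic step size: large while (s, a) is new,
    # shrinking as that pair accumulates experience.
    return A / (A + visits[(s, a)])

for episode in range(200):
    s = 0
    while True:
        a = random.randint(0, 1) if random.random() < 0.1 else \
            max((0, 1), key=lambda act: Q[(s, act)])
        s_next, r, done = step(s, a)
        target = r if done else r + GAMMA * max(Q[(s_next, 0)], Q[(s_next, 1)])
        visits[(s, a)] += 1
        Q[(s, a)] += alpha(s, a) * (target - Q[(s, a)])
        s = s_next
        if done:
            break

print({s: round(max(Q[(s, 0)], Q[(s, 1)]), 3) for s in range(N_STATES)})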
- Transfer Learning.
Reinforcement learning works fine, but do we have to start learning from scratch every time? We would not get far with a learning agent that has to be trained from scratch for every single task it is assigned. How about us? Do we learn from scratch every time? Once a knight learns hand-to-hand combat, does it help him learn sword fighting? In school, we are trained to do a simple task first. Then we tackle a more complicated one. Little by little, we get good at it and sometimes forget how far we have come. On the other hand, tackling a complicated task directly, without prior experience of a simpler related task, brings frustration and mostly causes people to give up. Think about teaching your daughter to drive for the first time on a bustling street of Bangkok during rush hour. Since we transfer our learning, can we use a mechanism based on transfer learning to build an agent capable of doing complicated tasks? How can we build such a mechanism? What kind of agent structure allows learning to be transferred? “A typical goal of transfer [learning] is to reduce the time needed to learn a [task] after first learning [another task].” [Taylor, Kuhlmann, and Stone. Transfer Learning and Intelligence: an Argument and Approach, in The First Conference on Artificial General Intelligence, 2008] If you are interested, there is a short talk (~8 mins), Transfer Learning and Intelligence: An Argument and Approach, by Matthew E. Taylor, that may give you a rough idea of how learning can be transferred; a tiny code sketch of one form of transfer follows below.
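As a rough sketch of one simple flavor of transfer (value transfer through a hand-made inter-task mapping; this is my own toy illustration, not the specific method of Taylor et al.): suppose a small source task has already been learned and we want a warm start for a larger target task. All sizes, mappings, and the stand-in source table below are made up for illustration.

import numpy as np

# Hypothetical setting: a source task with 5 states / 2 actions has already
# been learned (Q_source), and we want a warm start for a larger target task
# with 10 states / 2 actions instead of learning from scratch.
rng = np.random.default_rng(0)
Q_source = rng.uniform(0.0, 1.0, size=(5, 2))   # stands in for a learned table

def state_mapping(target_state):
    # Inter-task mapping chosen by hand for illustration: every pair of
    # target states is treated as "similar" to one source state.
    return target_state // 2

def action_mapping(target_action):
    return target_action          # actions happen to correspond one-to-one

def transfer_q(Q_src, n_target_states, n_target_actions):
    Q_tgt = np.zeros((n_target_states, n_target_actions))
    for s in range(n_target_states):
        for a in range(n_target_actions):
            Q_tgt[s, a] = Q_src[state_mapping(s), action_mapping(a)]
    return Q_tgt

# Target-task learning would then start from this table instead of zeros,
# ideally reducing the time needed to learn the new task.
Q_target_init = transfer_q(Q_source, 10, 2)
print(Q_target_init.round(2))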
- Reinforcement Learning with Assistance.
“You'll never walk alone”, a familiar line, reminds us that we will have help and support when we need it. That's nice. Does a learning agent have to work alone? It explores and exploits what it has learned to optimize its assigned objective without any help or support. Does it have to stay that way? How can we incorporate our help into the reinforcement learning framework? What would be a good way to allow occasional input from a human (or another agent) into a learning agent? Would it be beneficial to let an agent ask for our advice or opinion on something it is uncertain about? Although this sounds like a good idea (at least, this is what we do in our society, right? Helping each other. Are we still doing that?), so far, as of April 4th, 2012, I have not found many works in this direction. One of them is Using Active Relocation to Aid Reinforcement Learning, by Mihalkova and Mooney, in Proceedings of the 19th International FLAIRS Conference, 2006. But Mihalkova and Mooney allow an agent to ask to change its state. This lets their agent skip an easy part of the task and go directly to where it really wants to practice. That idea sounds more like active learning for reinforcement learning, rather than the kind of assistance I have in mind. What I have in mind is a mechanism allowing occasional external input, e.g., human guidance (or another agent's experience in a multi-agent environment, perhaps when that other agent has already learned about the state the agent is facing); a small sketch of this flavor of assistance follows below.
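Here is a small sketch of what I mean (my own toy code, not Mihalkova and Mooney's method): an epsilon-greedy action chooser that, when its value estimates for the available actions are nearly tied, asks an external advisor, which stands in for a human or another agent and may stay silent. The tie-margin test is just one crude way an agent might decide when to ask; all names and numbers are illustrative.

import random

def choose_action(Q, state, actions, advisor=None, epsilon=0.1, tie_margin=0.05):
    # Pick an action, occasionally deferring to external advice.
    # advisor(state) stands in for a human or another agent; it may return
    # a suggested action or None.
    values = sorted((Q.get((state, a), 0.0) for a in actions), reverse=True)
    uncertain = len(values) > 1 and (values[0] - values[1]) < tie_margin
    if uncertain and advisor is not None:
        advice = advisor(state)
        if advice in actions:
            return advice
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((state, a), 0.0))

# Toy advisor: only answers for states it "knows", otherwise stays silent.
def human_advisor(state):
    known = {0: "right", 3: "right"}
    return known.get(state)

Q = {(0, "left"): 0.41, (0, "right"): 0.43}          # nearly tied -> may ask
print(choose_action(Q, 0, ["left", "right"], advisor=human_advisor))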
- Reinforcement Learning with Plastic Memory.
This takes reinforcement learning another step. A conventional RL agent has a fixed-size memory. Of course, we can start up an RL agent with a huge memory capable of holding everything it may learn. However, this approach is likely to slow down the learning process. On the other hand, an RL agent with a small memory may learn fast at the beginning, but it has limited capacity. What should we do? The answer may lie in how we learn. The memory size does not have to be fixed. When we learn to do something, first we learn to do it roughly, then we gradually refine our skills. Our memory grows as the learning process goes on: it starts with a small portion allocated to the new learning experience, and later, as we keep at it, this portion of memory grows. There is some work in this area. If you are interested, Montazeri et al.'s Continuous state/action reinforcement learning: A growing self-organizing map approach, in Neurocomputing 74 (2011), may be a good starting point; a bare-bones sketch of the growing idea follows below.
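The sketch below is a caricature of the growing idea, not Montazeri et al.'s algorithm (a real growing self-organizing map also maintains neighborhood structure among units): start with a single prototype, allocate a new one whenever an observation is far from everything stored so far, and otherwise refine the nearest prototype. The threshold and learning rate are arbitrary.

import numpy as np

class GrowingMemory:
    # A bare-bones growing prototype memory: start small; when an observation
    # is far from every stored prototype, allocate a new one, otherwise
    # nudge the nearest prototype toward the observation.
    def __init__(self, dim, grow_threshold=0.5, lr=0.2):
        self.prototypes = np.zeros((1, dim))     # start with a single unit
        self.grow_threshold = grow_threshold
        self.lr = lr

    def observe(self, x):
        x = np.asarray(x, dtype=float)
        dists = np.linalg.norm(self.prototypes - x, axis=1)
        winner = int(np.argmin(dists))
        if dists[winner] > self.grow_threshold:
            self.prototypes = np.vstack([self.prototypes, x])   # grow
        else:
            self.prototypes[winner] += self.lr * (x - self.prototypes[winner])
        return winner

mem = GrowingMemory(dim=2)
for point in [(0.1, 0.1), (0.9, 0.9), (0.85, 0.95), (0.1, 0.2)]:
    mem.observe(point)
print(len(mem.prototypes), "prototypes after 4 observations")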
- Deep Learning.
Naturally, our perception works in a hierarchy: we perceive things at multiple levels of abstraction. For example, knights are warriors in medieval Europe; they live by the knight's code and have great chivalric skills; one abstraction level up, a warrior is a human trained for combat; one more level up, a human is a creature of our species, and so on. “[In deep architectures,] lower-level features or concepts are progressively combined into more abstract and higher-level representations. … [Deep architectures are] crucial in order to make progress on the kind of complex tasks required for artificial intelligence.” Bengio and LeCun, Scaling Learning Algorithms towards AI, in Large-Scale Kernel Machines, MIT Press, 2007. To start digging further, watch Tutorial: Learning Deep Architectures, by Yoshua Bengio and Yann LeCun.
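The "progressively combined" part is easy to see in code. Below is a toy forward pass through three layers with random (untrained) weights, just to show how each layer's output becomes the input of the next, turning raw input into progressively smaller, more abstract representations; in a real deep network the weights would of course be learned, not random.

import numpy as np

rng = np.random.default_rng(1)

def layer(x, n_out):
    # One layer: a linear combination of the previous layer's features,
    # followed by a nonlinearity. Weights are random here, only to show
    # how representations are composed, not to do anything useful.
    W = rng.normal(size=(x.shape[-1], n_out))
    return np.tanh(x @ W)

x = rng.normal(size=(1, 64))        # raw input (e.g., pixel values)
h1 = layer(x, 32)                   # low-level features
h2 = layer(h1, 16)                  # combinations of low-level features
h3 = layer(h2, 8)                   # more abstract representation
print(x.shape, h1.shape, h2.shape, h3.shape)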
- Active Learning.
This is like a principle of choice. In life, we always have a choice. In learning, we don't have to be passive. Rather than waiting for data to be fed in, we can ask a question whose answer would help us understand the subject better. In many cases, this speeds up our learning. The idea works so well for us that it has been developed for machines. “The key idea behind active learning is that a machine learning algorithm can achieve greater accuracy with fewer training labels if it is allowed to choose the data from which it learns.” [Burr Settles. Active Learning Literature Survey, Computer Sciences Technical Report 1648, University of Wisconsin-Madison, 2010] Want more? Watch Sanjoy Dasgupta and John Langford's Active Learning tutorial.
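One of the simplest strategies covered in that survey, uncertainty sampling, fits in a few lines. The sketch below queries the unlabeled example the current model is least confident about; the classifier here is a fake stand-in so the snippet runs on its own, and all names are illustrative.

import numpy as np

def least_confident_query(predict_proba, unlabeled_pool):
    # Pick the unlabeled example the current model is least sure about
    # (uncertainty sampling), i.e., the one whose top predicted class
    # has the lowest probability.
    probs = predict_proba(unlabeled_pool)          # shape: (n_samples, n_classes)
    confidence = probs.max(axis=1)                 # confidence of the top label
    return int(np.argmin(confidence))              # ask for this one's label

# Toy stand-in for a trained classifier's probability estimates.
def fake_predict_proba(X):
    rng = np.random.default_rng(42)
    p = rng.uniform(size=(len(X), 2))
    return p / p.sum(axis=1, keepdims=True)

pool = np.zeros((5, 3))                            # 5 unlabeled examples
print("query index:", least_confident_query(fake_predict_proba, pool))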
- Statistical Machine Translation.
Language is not everything, but it is our major medium for passing on knowledge. It has been the main method of knowledge representation in our world (before we had YouTube, but after cave drawings). Needless to say, language translation is always needed. Automatic natural language translation will not only bridge understanding between people speaking different languages; it will help broaden the perspective of individuals and allow intellectual freedom, especially in our current diverse world culture. Statistical Machine Translation (SMT) employs machine learning techniques: “[after applying] a learning algorithm to a large body of previously translated text, known variously as a parallel corpus, parallel text, bitext, or multitext. The [automatic translator] is then able to translate previously unseen sentences.” [Adam Lopez. Statistical Machine Translation, ACM Computing Surveys, 40(3), 2008] There are many online videos to help you get started on SMT. To pick one, try Philipp Koehn's Phrase-based and factored statistical machine translation.
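To make the "learn from a parallel corpus, then translate unseen sentences" idea concrete, here is a toy illustration that is nowhere near a real phrase-based system: it builds a crude word-translation table from co-occurrence counts in a tiny made-up parallel corpus and then translates an unseen sentence word by word. Real SMT uses proper alignment models (e.g., the IBM models) and a language model on top; the corpus and scoring rule here are purely illustrative.

from collections import Counter, defaultdict

# A tiny made-up parallel corpus: (English sentence, French sentence) pairs.
corpus = [
    ("the cat", "le chat"),
    ("the dog", "le chien"),
    ("the house", "la maison"),
    ("a cat", "un chat"),
]

# Count how often each source word co-occurs with each target word in
# aligned sentence pairs, and how often each target word appears overall.
cooc = defaultdict(Counter)
target_count = Counter()
for src, tgt in corpus:
    for t_word in tgt.split():
        target_count[t_word] += 1
    for s_word in src.split():
        for t_word in tgt.split():
            cooc[s_word][t_word] += 1

def translate_word(s_word):
    if s_word not in cooc:
        return s_word                      # unknown word: pass it through
    # Score candidates by co-occurrence relative to how common the target
    # word is, so frequent words like "le" do not win by accident.
    return max(cooc[s_word], key=lambda t: cooc[s_word][t] / target_count[t])

# "Translate" a previously unseen sentence word by word.
print(" ".join(translate_word(w) for w in "a dog".split()))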
- Others.
Frankly, there are a lot of things that seem interesting. But I don't mean to keep track of everything here. First, because having too many things is like having nothing: it will bore you, exhaust you, and keep you away. This is like a principle of staying focused. Second, because this is a broad and highly active research area. I know only a little, and new terms, new concepts, and new ideas keep coming. In this field, as in life, I'm a student. There's always something to learn. Is this a fun area to work on or what?
“What do you first do when you learn to swim? You make mistakes, do you not? And what happens? You make other mistakes, and when you have made all the mistakes you possibly can without drowning – and some of them many times over – what do you find? That you can swim? Well – life is just the same as learning to swim! Do not be afraid of making mistakes, for there is no other way of learning how to live!” – Alfred Adler