... is a Professor in the Department of Industrial Engineering at Inha University, South Korea.
Furthermore, Prof. Dr. Wookey Lee has also held a number of roles, including:
- Director of the VOICE AI Institute, Inha University
- Steering Committee Member of IEEE BigComp
- Executive Committee Member of IEEE TCDE
- Associate Editor of the World Wide Web Journal, SUPE, CLUS, etc.
- Editor-in-Chief of Big Data Service Journal
He has also served as a chair and program committee member of numerous international conferences. He is an expert in big data, computer security, and AI. He has been invited as a keynote speaker at many conferences and has received many best paper awards. In addition to a strong publication record, his research has produced more than 100 patents, including:
“DEVICE FOR SUPPLEMENTING VOICE AND METHOD FOR CONTROLLING THE SAME,” US9271066B2
Wookey Lee is the author of several textbooks.
Why is a team greater than the sum of its members’ capabilities? Forging a team depends on solid collaboration among the team members, amalgamated with each member’s abilities. These two aspects make it challenging to find the right mix of members, which we address with a novel notion of Synergy derived from a graph G. This notion has three main goals:
(i) introducing the concept of the Team Synergy Problem (TSP) and proposing a novel function, (ii) identifying the intrinsic structure of G for predicting potential Synergies, and (iii) developing a top-k Team Synergy Algorithm (TSA).
Specifically, the TSP is formulated by embedding three essential elements, Communication, Cooperativeness, and Complementarity, to quantify the Synergy between adjacent experts and construct a Synergy graph GS. We prove that the TSP is NP-hard and propose TSA to form top-k teams from GS within a budget B. TSA uses PSEUDO-STAR configurations to prune instances efficiently. Moreover, it applies a tensor decomposition method, RESCAL, to the tensored Synergy graph to predict potential Synergies on unknown edges and recommend new teammates for a given team.
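TSA itself is not spelled out in the abstract; as a minimal sketch of the budgeted top-k team formation setting it addresses, the toy example below scores a team by summing pairwise Synergy edges and exhaustively picks the best teams within a budget B. All expert names, synergy scores, and costs are invented for illustration, and the brute-force search stands in for the pruned algorithm.

```python
import itertools

# Hypothetical pairwise synergy scores and per-expert costs; all names
# and numbers are illustrative, not taken from the talk's datasets.
synergy = {
    ("alice", "bob"): 0.9, ("alice", "carol"): 0.4, ("alice", "dave"): 0.1,
    ("bob", "carol"): 0.7, ("bob", "dave"): 0.2, ("carol", "dave"): 0.6,
}
cost = {"alice": 3, "bob": 2, "carol": 2, "dave": 1}

def team_synergy(team):
    """Sum of pairwise synergies within the team (unknown pairs count as 0)."""
    return sum(synergy.get(tuple(sorted(p)), 0.0)
               for p in itertools.combinations(sorted(team), 2))

def top_k_teams(experts, budget, k):
    """Exhaustively score every team whose total cost fits the budget and
    return the k best. This brute force is exponential in the number of
    experts, which is why a practical algorithm must prune the search."""
    feasible = []
    for r in range(2, len(experts) + 1):
        for team in itertools.combinations(sorted(experts), r):
            if sum(cost[e] for e in team) <= budget:
                feasible.append((team_synergy(team), team))
    feasible.sort(reverse=True)
    return feasible[:k]

best = top_k_teams(list(cost), budget=5, k=3)
print(best)
```

Even on four experts the search touches every subset, which illustrates why the NP-hardness result matters and why TSA's PSEUDO-STAR pruning is needed at realistic scales.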
Experimental results on several real datasets show that TSA significantly outperforms the state-of-the-art algorithms.
... is currently an Associate Professor at the Faculty of Information Technology and Digital Innovation, King Mongkut’s University of Technology North Bangkok (KMUTNB), Thailand, where he also serves as Director of the Central Library. Phayung received a Bachelor of Science in Technical Education (Teaching in Electrical Engineering) from KMUTNB in 1994, and a Master of Science (MS) and Doctor of Philosophy (Ph.D.) in Electrical Engineering from the School of Electrical and Computer Engineering, Oklahoma State University (OSU), Stillwater, USA, in 1998 and 2002, respectively. His research interests include Artificial Intelligence, Big Data Analytics, Business Intelligence and Analytics, Computational Intelligence, Data Analytics, Data Mining, Data Science, Deep Learning, Digital Signal Processing, Image Processing, Machine Learning, Metaheuristics Optimization, Natural Language Processing, and Time Series Analysis.
Artificial Intelligence (AI) applications have grown rapidly in recent years, many of them based on reinforcement learning (RL), a type of machine learning in which an agent learns to behave in an environment by trial and error. The agent receives a reward for actions that lead to desired outcomes and a penalty for actions that lead to undesired ones. Deep reinforcement learning (DRL) uses deep learning to represent the agent's state and the environment, allowing DRL agents to learn to obtain optimal rewards in complex environments with large state and action spaces. DRL has been applied in several domains, including game playing, robotics, and finance. In finance, it has been used to develop trading algorithms that can automatically buy and sell stocks.

One challenge of using DRL for stock trading is that the stock market is a very complex environment: many factors can affect the price of a stock, and it is difficult to predict how these factors will change in the future. Another challenge is that the stock market is highly competitive, with many other traders also trying to make money by buying and selling stocks. It is therefore important for DRL agents to learn quickly and adapt to changes in the market. DRL agents can learn to identify patterns in the market that humans may not be able to see, and they can adapt to market changes much faster than humans.

DRL is thus a promising technology for stock trading and can be a powerful tool for making money in the stock market. However, it is still a relatively new technology that requires improvement to overcome existing problems before being applied in real applications. This talk reviews the state of the art in deep reinforcement learning and stock time series prediction with multivariate stock technical analysis.
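The agent/reward loop the abstract describes can be made concrete with a toy example. The sketch below uses plain tabular Q-learning rather than deep RL, on a tiny synthetic price series; the state encoding, reward design, and all numbers are illustrative assumptions, not a real trading strategy or the method presented in the talk.

```python
import random

random.seed(0)

# Synthetic price series; purely illustrative, not market data.
prices = [10, 11, 12, 11, 13, 14, 13, 15, 14, 16]
ACTIONS = ["hold", "buy", "sell"]

def run_episode(q, eps=0.1, alpha=0.5, gamma=0.9):
    """One pass over the series. State = (time step, holding a share or not);
    the reward is the realized profit or loss when a share is sold."""
    holding, buy_price, total = False, 0.0, 0.0
    for t in range(len(prices) - 1):
        state = (t, holding)
        if random.random() < eps:  # explore: try a random action
            action = random.choice(ACTIONS)
        else:                      # exploit the current Q estimates
            action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
        reward = 0.0
        if action == "buy" and not holding:
            holding, buy_price = True, prices[t]
        elif action == "sell" and holding:
            reward, holding = prices[t] - buy_price, False
        total += reward
        next_state = (t + 1, holding)
        best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
        old = q.get((state, action), 0.0)
        # Standard Q-learning update toward reward + discounted best future value.
        q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return total

q = {}
for _ in range(500):                  # trial-and-error training episodes
    run_episode(q)
profit = run_episode(q, eps=0.0)      # greedy evaluation pass
print(round(profit, 2))
```

A DRL agent replaces the Q table with a neural network so that it can generalize over the far richer states (prices, indicators, order books) of a real market, which is exactly where the complexity and competitiveness challenges above arise.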