Scalable Internal-State Policy-Gradient Methods for POMDPs. Douglas Aberdeen, Jonathan Baxter.
Feature Subset Selection and Inductive Logic Programming. Erick Alphonse, Stan Matwin.
Semi-supervised Clustering by Seeding. Sugato Basu, Arindam Banerjee, Raymond Mooney.
Inductive Logic Programming out of Phase Transition: A bottom-up constraint-based approach. Jacques Ales Bianchetti, Celine Rouveirol, Michele Sebag.
Exploiting Relations Among Concepts to Acquire Weakly Labeled Training Data. Joseph Bockhorst, Mark Craven.
An epsilon-Optimal Grid-Based Algorithm for Partially Observable Markov Decision Processes. Blai Bonet.
Transformation-Based Regression. Bjorn Bringmann, Stefan Kramer, Friedrich Neubarth, Hannes Pirker, Gerhard Widmer.
A New Statistical Approach on Personal Name Extraction. Zheng Chen, Feng Zhang.
Learning Decision Rules by Randomized Iterative Local Search. Michael Chisholm, Prasad Tadepalli.
IEMS - The Intelligent Email Sorter. Elisabeth Crawford, Judy Kay, Eric McCreath.
Exact model averaging with naive Bayesian classifiers. Denver Dash, Gregory Cooper.
Anytime Interval-Valued Outputs for Kernel Machines: Fast Support Vector Machine Classification via Distance Geometry. Dennis DeCoste.
Action Refinement in Reinforcement Learning by Probability Smoothing. Thomas Dietterich, Didac Busquets, Ramon Lopez de Mantaras, Carles Sierra.
Integrating Experimentation and Guidance in Relational Reinforcement Learning. Kurt Driessens, Saso Dzeroski.
Is Combining Classifiers Better than Selecting the Best One? Saso Dzeroski, Bernard Zenko.
Fast Minimum Training Error Discretization. Tapio Elomaa, Juho Rousu.
Learning Decision Trees Using the Area Under the ROC Curve. Cesar Ferri, Peter Flach, Jose Hernandez-Orallo.
Numerical Minimum Message Length Inference of Univariate Polynomials. Leigh Fitzgibbon, David Dowe, Lloyd Allison.
Multi-Instance Kernels. Thomas Gaertner, Peter Flach, Adam Kowalczyk, Alex Smola, Robert Williamson.
An Analysis of Functional Trees. Joao Gama.
Descriptive Induction through Subgroup Discovery: A Case Study in a Medical Domain. Dragan Gamberger, Nada Lavrac.
On generalization bounds, projection profile, and margin distribution. Ashutosh Garg, Sariel Har-Peled, Dan Roth.
Combining Labeled and Unlabeled Data for MultiClass Text Categorization. Rayid Ghani.
Hierarchically Optimal Average Reward Reinforcement Learning. Mohammad Ghavamzadeh, Sridhar Mahadevan.
Sufficient Dimensionality Reduction - A novel Analysis Principle. Amir Globerson, Naftali Tishby.
A Unified Decomposition of Ensemble Loss for Predicting Ensemble Performance. Michael Goebel, Pat Riddle, Mike Barley.
Graph-Based Relational Concept Learning. Jesus Gonzalez, Lawrence Holder, Diane Cook.
Algorithm-Directed Exploration for Model-Based Reinforcement Learning. Carlos Guestrin, Relu Patrascu, Dale Schuurmans.
Coordinated Reinforcement Learning. Carlos Guestrin, Michail Lagoudakis, Ronald Parr.
Discovering Hierarchy in Reinforcement Learning with HEXQ. Bernhard Hengst.
Classification Value Grouping. Colin Ho.
Linkage and Autocorrelation Cause Feature Selection Bias in Relational Learning. David Jensen, Jennifer Neville.
Approximately Optimal Approximate Reinforcement Learning. Sham Kakade, John Langford.
An Alternate Objective Function for Markovian Fields. Sham Kakade, Yee Whye Teh, Sam Roweis.
Interpreting and Extending Classical Agglomerative Clustering Algorithms using a Model-Based Approach. Sepandar Kamvar, Dan Klein, Christopher Manning.
Kernels for Semi-Structured Data. Hisashi Kashima, Teruo Koyanagi.
A Fast Dual Algorithm for Kernel Logistic Regression. Sathiya Keerthi, Kaibo Duan, Shirish Shevade, Aun Poo.
From Instance-level Constraints to Space-Level Constraints: Making the Most of Prior Knowledge in Data Clustering. Dan Klein, Sepandar Kamvar, Christopher Manning.
Diffusion Kernels on Graphs and Other Discrete Structures. Risi Kondor, John Lafferty.
Learning the Kernel Matrix with Semi-Definite Programming. Gert Lanckriet, Nello Cristianini, Peter Bartlett, Laurent El Ghaoui, Michael Jordan.
Competitive Analysis of the Explore/Exploit Tradeoff. John Langford, Martin Zinkevich, Sham Kakade.
Combining Train Set and Test Set Bounds. John Langford.
Inducing Process Models from Continuous Data. Pat Langley, Javier Sanchez, Ljupco Todorovski, Saso Dzeroski.
Reinforcement Learning and Shaping: Encouraging Intended Behaviors. Adam Laud, Gerald DeJong.
Cranking: An Ensemble Method for Combining Rankers using Conditional Probability Models on Permutations. Guy Lebanon, John Lafferty.
Learning to Share Distributed Probabilistic Beliefs. Christopher Leckie, Ramamohanarao Kotagiri.
The Perceptron Algorithm with Uneven Margins. Yaoyong Li, Hugo Zaragoza, Ralf Herbrich, John Shawe-Taylor, Jaz Kandola.
Partially Supervised Classification of Text Documents. Bing Liu, Wee Sun Lee, Philip S. Yu, Xiaoli Li.
Feature Selection with Active Learning. Huan Liu, Hiroshi Motoda, Lei Yu.
Investigating the Maximum Likelihood Alternative to TD(lambda). Fletcher Lu, Relu Patrascu, Dale Schuurmans.
A Necessary Condition of Convergence for Reinforcement Learning with Function Approximation. Artur Merke, Ralf Schoknecht.
Towards "Large Margin" Speech Recognizers by Boosting and Discriminative Training. Carsten Meyer, Peter Beyerlein.
Learning word normalization using word suffix and context from unlabeled data. Dunja Mladenic.
Active + Semi-supervised Learning = Robust Multi-View Learning. Ion Muslea, Steven Minton, Craig Knoblock.
Adaptive View Validation: A First Step Towards Automatic View Detection. Ion Muslea, Steven Minton, Craig Knoblock.
Stock Trading System Using Reinforcement Learning with Cooperative Agents. Jangmin O, Jae Won Lee, Byoung-Tak Zhang.
Learning k-Reversible Context-Free Grammars from Positive Structural Examples. Tim Oates, Devina Desai, Vinay Bhat.
MMIHMM: Maximum Mutual Information Hidden Markov Models. Nuria Oliver, Ashutosh Garg.
Learning Spatial and Temporal Correlation for Navigation in a 2-Dimensional Continuous World. Anand Panangadan, Michael Dyer.
A Boosted Maximum Entropy Model for Learning Text Chunking. Seong-Bae Park, Byoung-Tak Zhang.
On the Existence of Fixed Points for Q-Learning and Sarsa in Partially Observable Domains. Theodore Perkins, Mark Pendrith.
Learning from Scarce Experience. Leonid Peshkin, Christian Shelton.
Automatic Creation of Useful Macro-Actions in Reinforcement Learning. Marc Pickett, Andrew Barto.
Using Unlabelled Data for Text Classification through Addition of Cluster Parameters. Bhavani Raskutti, Adam Kowalczyk, Herman Ferra.
Using Abstract Models of Behaviours to Automatically Generate Reinforcement Learning Hierarchies. Malcolm Ryan.
Syllables and other String Kernel Extensions. Craig Saunders, Hauke Tschach, John Shawe-Taylor.
Incorporating Prior Knowledge into Boosting. Robert Schapire, Marie Rochery, Mazin Rahim, Narendra Gupta.
Modeling Auction Price Uncertainty Using Boosting-based Conditional Density Estimation. Robert Schapire, Peter Stone, David McAllester, Michael Littman, Janos Csirik.
How to Make Stacking Better and Faster While Also Taking Care of an Unknown Weakness. Alexander K. Seewald.
Model-based Hierarchical Average-reward Reinforcement Learning. Sandeep Seri, Prasad Tadepalli.
Separating Skills from Preference: Using Learning to Program by Reward. Daniel Shapiro, Pat Langley.
Discriminative Feature Selection via Multiclass Variable Memory Markov Model. Noam Slonim, Gill Bejerano, Shai Fine, Naftali Tishby.
Learning to Fly by Controlling Dynamic Instabilities. David Stirling.
Randomized Variable Elimination. David Stracuzzi, Paul Utgoff.
Markov Chain Monte Carlo Sampling using Direct Search Optimization. Malcolm Strens, Mark Bernhardt, Nicholas Everett.
Qualitative reverse engineering. Dorian Suc, Ivan Bratko.
Finding an Optimal Gain-Ratio Subset-Split Test for a Set-Valued Attribute in Decision Tree Induction. Fumio Takechi, Einoshin Suzuki.
Refining the Wrapper Approach - Smoothed Error Estimates for Feature Selection. Loo-Nin Teow, Hwee Tou Ng, Haifeng Liu, Eric Yap.
Sparse Bayesian Learning for Regression and Classification using Markov Chain Monte Carlo. Shien-Shin Tham, Arnaud Doucet, Ramamohanarao Kotagiri.
On the Issues of Classifier Evaluation using Cost Space Representation. Kai Ming Ting.
Modeling for Optimal Probability Prediction. Yong Wang, Ian H. Witten.
Mining Both Positive and Negative Association Rules. Xindong Wu, Shichao Zhang.
Non-Disjoint Discretization for Naive-Bayes Classifiers. Ying Yang, Geoffrey I. Webb.
Representational Upper Bounds of Bayesian Networks. Huajie Zhang, Charles Ling.
Content-Based Image Retrieval Using Multiple-Instance Learning. Qi Zhang, Wei Yu, Sally Goldman, Jason Fritts.
Statistical Behavior and Consistency of Support Vector Machines, Boosting, and Beyond. Tong Zhang.
Pruning Improves Heuristic Search for Cost-Sensitive Learning. Valentina Bayer Zubek, Thomas Dietterich.