How do we know what is likely to be true?

Title                          How do we know what is likely to be true?

Teaching staff          Ellen Evers, Job van Wolferen, Daniel Lakens

Date                         September 19th 2013

Type of course         Basic Workshop

Duration                   One-day meeting

Language                 English

Content                    In science it is impossible to know for a fact whether a general theory is true; the methods we use can only help us understand how likely something is to be true or not true. However, interpreting results and estimating these likelihoods is sometimes difficult. The aim of this day is to gain more skill in interpreting your own and other people's findings, so that you can better judge whether theories and findings are likely to be true.

Regarding other people's work, you'll learn about tools and interpretations of data that help you decide which literature to expand on with your own research, how to interpret inconsistencies between theories, which papers to spend time on, and which to ignore. Regarding your own work, this day will help you design better experiments that provide more information, and give you more insight into which null findings are true nulls vs. Type II errors, and which significant results are true effects vs. Type I errors.


A tentative outline of the day:

A small side-step into theory-formation and interpretation (characteristics of a good theory)

Bayesian thinking (p-values are essentially likelihoods that need Bayesian interpretations to be meaningful)

What is a p-value and how should we interpret it (false positives / p-curve / Bayesian likelihoods)

Why power matters

Simulation as a tool to answer statistical questions
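The last two points of the outline can be previewed with a small Monte Carlo sketch (illustrative only, not part of the workshop materials). It assumes two equal-sized groups drawn from normal distributions with known standard deviation 1, tested with a simple z-test: with no true effect the fraction of significant results estimates the Type I error rate, and with a true effect it estimates power.

```python
import math
import random

def simulate_significance(effect, n, n_sims=2000, alpha=0.05):
    """Draw two samples of size n whose true means differ by `effect`,
    run a two-sided z-test on each pair, and return the fraction of
    simulations with p < alpha.  With effect == 0 this estimates the
    false-positive (Type I) rate; with effect > 0 it estimates power
    (one minus the Type II error rate)."""
    significant = 0
    for _ in range(n_sims):
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(effect, 1) for _ in range(n)]
        mean_diff = sum(b) / n - sum(a) / n
        se = math.sqrt(2 / n)  # standard error, known sd = 1 in both groups
        z = mean_diff / se
        # Two-sided p-value from the standard normal distribution
        p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        if p < alpha:
            significant += 1
    return significant / n_sims

# With no true effect, roughly alpha (5%) of tests come out "significant":
#   simulate_significance(effect=0.0, n=50)
# With a medium effect (d = 0.5) and n = 50 per group, power is well below 1:
#   simulate_significance(effect=0.5, n=50)
```

A few minutes of experimenting with `effect` and `n` makes the day's central point concrete: a non-significant result from an underpowered study carries very little information about whether the null is true.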

Scaling models of cognition to the real world

Teaching staff          Iris van Rooij, Johan Kwisthout, Todd Wareham & Mark Blokpoel

Date                         October 24th 2013

Type of course         Theoretical Workshop

Duration                   One-day meeting

Language                 English

Content                    A common property of computational- or rational-level models of cognition is that the cognitive capacities they postulate are computationally intractable (e.g., NP-hard). Formally, this means that the computations these models postulate cannot be performed in polynomial time (unless P = NP); all known algorithms for them require exponential time. This characteristic seems unproblematic when modeling laboratory-scale cognition, because in the lab cognition is confined to relatively simple situations. Yet the intractability of a cognitive model means that the computations it postulates do not scale in any obvious way to explain how cognitive capacities can operate in complex real-world situations outside the lab. How can cognitive scientists overcome this explanatory obstacle?

In this workshop, participants will learn useful techniques from theoretical computer science for identifying model parameters (i.e., variables that mathematically express properties of the environment and the cognitive capacity) that cause intractability. These techniques can be used to generate hypotheses about how models can be constrained so as to make them computationally tractable with minimal loss of generality, thereby improving their scalability. The tutorial will include illustrations of concrete applications as well as discussions of relevant philosophical issues (e.g., pertaining to approximation, heuristics, etc.).
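The flavor of this parameterized approach can be shown with a standard toy problem (chosen here for illustration; it is not taken from the workshop materials). Vertex Cover is NP-hard, so brute force over all vertex subsets takes time exponential in the size of the graph. A simple bounded-search-tree algorithm, however, runs in time on the order of 2^k times the number of edges: exponential only in the parameter k, the size of the cover sought. If the environment guarantees that k is small, the problem is tractable in practice, which is the same logic behind identifying which parameters of a cognitive model are the sources of its intractability.

```python
def vertex_cover(edges, k):
    """Bounded search tree for Vertex Cover: is there a set of at most
    k vertices that touches every edge?  Each recursion branches on the
    two endpoints of one uncovered edge, so the search tree has at most
    2**k leaves -- exponential in k, not in the size of the graph."""
    if not edges:
        return True          # every edge is already covered
    if k == 0:
        return False         # edges remain but the budget is spent
    u, v = edges[0]
    # Any valid cover must contain u or v; try both choices.
    return (vertex_cover([e for e in edges if u not in e], k - 1) or
            vertex_cover([e for e in edges if v not in e], k - 1))

# A 4-cycle needs two vertices (e.g., opposite corners) to cover all edges:
square = [(1, 2), (2, 3), (3, 4), (4, 1)]
# vertex_cover(square, 1) -> False
# vertex_cover(square, 2) -> True
```

The analogous move in cognitive modeling is to ask which parameters of a model (analogous to k here) can plausibly be assumed small in the organism's environment, so that the postulated computation remains feasible at real-world scale.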

Literature                 Workshop participants are asked to read the following article as preparation for the workshop: van Rooij, I. (2008). The tractable cognition thesis. Cognitive Science, 32, 939-984.

Other relevant literature:

Blokpoel, M., Kwisthout, J., Wareham, T., Haselager, P., Toni, I., & van Rooij, I. (2011). The computational costs of recipient design and intention recognition in communication. In L. Carlson, C. Hölscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society.

Kwisthout, J., Wareham, T., & van Rooij, I. (2011). Bayesian intractability is not an ailment that approximation can cure. Cognitive Science, 35(5), 779-784.

Kwisthout, J. & van Rooij, I. (2013). Bridging the gap between theory and practice of approximate Bayesian inference. Cognitive Systems Research, 24, 2-8.

Van Rooij, I., Evans, P., Müller, M., Gedge, J., & Wareham, T. (2008). Identifying sources of intractability in cognitive models: An illustration using analogical structure mapping. In B. C. Love, K. McRae, & V. M. Sloutsky (Eds.), Proceedings of the 30th Annual Conference of the Cognitive Science Society (pp. 915-920). Austin, TX: Cognitive Science Society.