StanCon 2018 Highlights

Published: January 24, 2018

tags: r, stan, stancon

This year we had the privilege of sponsoring StanCon. Unfortunately, we weren’t able to attend the conference ourselves. Rather than let our ticket go to waste, we ran a small competition, which Ignacio Martinez won with his very cool (if still alpha-stage) R package.


Highlights from StanCon 2018

During my econ PhD I learned a lot about frequentist statistics. Alas, my training in Bayesian statistics was limited. Three years ago, I joined @MathPolResearch and started delving into this whole new world. Two weeks ago, thanks to @jumping_uk, I was able to attend StanCon. It was an amazing experience that allowed me to meet some great people and learn a lot from them. These are my highlights from the conference:

You’d better have a very good reason not to use hierarchical models. Ben Goodrich’s tutorial on advanced hierarchical models was great. Most social science data has a natural hierarchy, and modelling it with Stan is easy! Slides for this three-day tutorial are available here: [day 1, day 2, day 3].

Everyone should take his or her model to the loo. @avehtari’s excellent tutorial covered cross-validation, reference predictive, and projection predictive approaches to model assessment, selection, and inference after model selection. The tutorial is available online, and everyone using Stan should work through it.
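To get a feel for what leave-one-out cross-validation estimates, here is a rough Python sketch of plain importance-sampling LOO. This is not the loo package’s actual algorithm (loo stabilises the weights with Pareto smoothing), and the toy data, normal model, and approximate posterior are all invented for the example:

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented toy problem: y ~ Normal(mu, 1) with a vague prior on mu,
# so the posterior for mu is roughly Normal(mean(y), 1/sqrt(n)).
y = rng.normal(0.5, 1.0, size=50)
n = y.size
draws = rng.normal(y.mean(), 1.0 / np.sqrt(n), size=4000)  # posterior draws of mu

# Pointwise log-likelihood matrix: rows = data points, columns = draws
ll = -0.5 * np.log(2 * np.pi) - 0.5 * (y[:, None] - draws[None, :]) ** 2

# In-sample log pointwise predictive density
lpd = np.log(np.exp(ll).mean(axis=1))

# Plain importance-sampling LOO: weight draw s for point i by 1/p(y_i | mu_s),
# which gives the harmonic mean of the pointwise likelihoods.
elpd_loo = -np.log(np.exp(-ll).mean(axis=1))

# LOO can never exceed the in-sample density (Jensen's inequality):
# the gap is the penalty for predicting data the model hasn't seen.
print(elpd_loo.sum() <= lpd.sum())  # True
```

The raw importance weights here have unbounded variance in realistic models, which is exactly the problem the Pareto-smoothing step in loo addresses.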

Bob Carpenter’s tutorial on how to verify fit and diagnose convergence answered many practical and theoretical questions I had. Bob did a great job explaining how effective sample sizes and potential scale reduction factors (‘R hats’) are calculated. He also gave us some practical rules:

  • We want R hat to be less than 1.05 and greater than 0.9
  • R hat equal to 1 does not guarantee convergence
  • An effective sample size between 50 and 100 is enough
  • Don’t be afraid to ask questions on the Stan forum
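As a rough sketch of where those R hat numbers come from, the basic potential scale reduction factor compares between-chain and within-chain variance. This is a simplified illustration, not Stan’s exact implementation (which splits chains in half, and in newer versions rank-normalises the draws); the chains below are simulated for the example:

```python
import numpy as np

def r_hat(chains):
    """Basic potential scale reduction factor.

    chains: array of shape (m, n) -- m chains of n draws each.
    Compares between-chain variance B with within-chain variance W;
    values near 1 suggest all chains are exploring the same distribution.
    """
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()   # mean within-chain variance
    B = n * chain_means.var(ddof=1)         # between-chain variance
    var_hat = (n - 1) / n * W + B / n       # pooled variance estimate
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(0)
mixed = rng.normal(size=(4, 1000))          # four chains, same target
stuck = mixed + np.array([[0.0], [0.0], [0.0], [5.0]])  # one chain far away

print(r_hat(mixed))  # close to 1: passes the < 1.05 rule of thumb
print(r_hat(stuck))  # well above 1.05: the chains disagree
```

When the chains agree, the pooled variance estimate is close to the within-chain variance and the ratio is near 1; a stray chain inflates the between-chain term and pushes R hat above the 1.05 threshold.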

The Bayesian Decision Making for Executives and Those who Communicate with Them series by Eric Novik and Jonathan Auerbach had some very good advice:

  • Before model building, ask: What decisions are you trying to make? What is the cost of the wrong decision? What is the gain from a good decision?
  • During model building: Elicit enough information about the problem so that a generative model can be expressed. This is very hard. A lot depends on the industry (e.g., book publishers are very different from pharma companies).
  • After the model has been fit: Communicate the results so stakeholders can make a decision. Some things to keep in mind when doing so include:
    • Stakeholders should not care about p-values, Bayes factors or ROC curves (but sometimes do).
    • Stakeholders should care about the uncertainty in your estimates, but often they do not.
    • Stakeholders should know their loss or utility function, but they often do not.

To sum up, the Stan developers are an incredibly talented and generous group of people who have created a useful and flexible programming language and a fantastic community around it. I look forward to future StanCons. A few other things that I am looking forward to in the nearer future (and I underStand are coming soon…):

  • A series of Coursera massive open online courses (MOOCs)
  • Support for parallel computing with MPI and GPUs
  • loo 2.0
