Why Submit to AI in Production: Speaking as a Tool for Better Work

Author: Gigi Kenneth

Published: January 20, 2026

tags: ai, machine-learning, engineering, events, r

We’re accepting abstracts for AI in Production until 23rd January. The conference takes place on 4th–5th June 2026 in Newcastle, with talks on Friday 5th across two streams: one focused on engineering and production systems, the other on machine learning and model development.

We often hear: “My work isn’t ready to talk about yet” or “I’m not sure anyone would be interested.” We want to address that hesitation directly.

Speaking at a conference isn’t primarily about promoting yourself or your organisation.

It’s a practical tool that helps you do better work. Preparing and delivering a talk forces useful reflection, invites feedback from people facing similar challenges, and turns knowledge that lives only in your head into something your team can reuse.

If you’re wondering whether your work qualifies: internal systems count, work in progress counts, partial success counts.

Submit your abstract by 23rd January on the AI in Production website.

Preparing a Talk Clarifies Your Decisions

When you sit down to explain a technical choice to an audience, you have to answer questions you might have glossed over at the time: Why did we build it this way? What constraints shaped our approach? What would we do differently now?

This isn’t about justifying your decisions to others. It’s about understanding them yourself. The process of turning a production system into a coherent narrative forces you to see patterns you were too close to notice while building it. You identify what worked, what didn’t, and why. That clarity is valuable whether or not you ever give the talk.

Many practitioners find that writing an abstract or outline reveals gaps in their thinking. A deployment strategy that seemed obvious at the time becomes harder to explain outside that context. A monitoring approach that felt pragmatic reveals underlying assumptions. This friction is useful. It means you’re learning something about your own work.

Speaking Invites Useful Feedback

The audience at AI in Production will broadly fall across two streams: engineering (building, shipping, maintaining, and scaling systems) and machine learning (model development, evaluation, and applied ML).

Whether you’re working on infrastructure and deployment or on training pipelines and model behaviour, you’ll be in a room with people facing similar constraints: limited resources, shifting requirements, imperfect data, and operational pressures.

When you share what you’ve tried, you get feedback from people who understand the context. Someone has solved a similar problem differently. Someone has run into the same failure mode. Someone asks a question that makes you reconsider an assumption.

This kind of peer feedback is hard to get otherwise. Your team is too close to the work. Online discussions lack context. A conference talk puts your approach in front of people who can offer informed perspectives without having to understand your entire stack or organisational structure first.

Talks Help Share Responsibility and Knowledge

In many teams, knowledge about production systems sits with one or two people. They know why certain decisions were made, where the edge cases are, and how to interpret the monitoring dashboards. That concentration of knowledge creates risk.

Preparing a talk is a forcing function for documentation. To explain your system to strangers, you have to articulate what’s currently tacit. That articulation becomes something your team can use: onboarding material, decision records, runbooks.

Speaking also distributes responsibility. When you present work publicly, it stops being just yours. Your team shares ownership of the ideas. Others can critique, extend, or maintain them. This is particularly valuable for platform teams or infrastructure work, where the people who built something may not be the ones operating it six months later.

Turning Tacit Knowledge into Reusable Material

Much of what you know about your production systems isn’t written down. You understand the failure modes, the workarounds, and the operational quirks. You know which metrics matter and which are noise. You remember why you made certain tradeoffs.

A conference talk is an excuse to capture that knowledge. The slides become a reference. The abstract becomes a design document. The Q&A reveals what wasn’t clear and needs better documentation.

Even if the talk itself is ephemeral, the process of preparing it leaves artefacts. You’ve already done the hard work of running the system. Speaking about it turns that experience into something others can learn from, and you can build on.

Your Work Is Worth Sharing

If you’re maintaining AI systems in production, you’re solving problems worth talking about: making models reliable under load, keeping training pipelines maintainable, monitoring behaviour when ground truth is delayed or absent, and managing technical debt while shipping features.

These are the problems practitioners face every day. Your approach won’t be perfect, and that’s the point. Talks about work in progress, about things that didn’t work, about compromises made under constraint are often more useful than polished success stories.

We’re looking for honest accounts of how people are actually building and operating AI systems. That might fit the engineering stream (deployment, infrastructure, monitoring, scaling) or the machine learning stream (training, evaluation, model behaviour, responsible data use). If you’re doing work in either area, you have something to contribute.

Submit an Abstract

The deadline is 23rd January. You’ll need a title and an abstract of up to 250 words. You don’t need a perfect story or a finished project. You need a problem you’ve worked on, some approaches you’ve tried, and some lessons you’ve learned.

Think about what would be useful for someone six months behind you on a similar path. Think about what you wish someone had told you before you started. Think about the conversation you’d want to have with peers who understand the constraints you’re working under.

If you’re not sure where to start, consider writing about one decision that shaped your system, one assumption that turned out to be wrong, or one constraint that changed your design. Good abstracts often start with a specific moment or choice rather than a broad overview.

Ready to submit? The deadline is 23rd January. Share one decision, one lesson, or one constraint from your production work:
https://jumpingrivers.com/ai-production/

If you have questions about whether your work fits the conference, reach out at events@jumpingrivers.com. We’re here to help make this easier.

