Programming with LLMs in R & Python

Working with LLMs in Practice
Large Language Models are becoming part of everyday data science work. But using them through chat interfaces is only one part of the picture.
In this upcoming webinar, we focus on how to work with LLMs programmatically, using R and Python to integrate them into real workflows and applications.
Secure your place by registering through the webinar registration form.
What We’ll Cover
We begin with a short introduction to how LLMs work, including how they are priced, where they perform well, and where they can fall short.
From there, the session moves into practical examples of working with LLMs in code:
- Sending prompts to an LLM API from R using the {ellmer} package
- Including additional instructions through system prompts
- Structuring prompts to return clean, tabular outputs
- Summarising images and PDFs using LLMs
While the examples will focus primarily on R, we will also briefly explore the {chatlas} package for Python, which offers similar functionality.
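To give a flavour of the kind of code covered in the session, here is a minimal sketch of sending a prompt with a system prompt from R using {ellmer}. The model name and prompt text are illustrative assumptions, and running the sketch requires a valid API key for the provider.

```r
library(ellmer)

# Create a chat object with a system prompt
# (provider and model name are assumptions for illustration)
chat <- chat_openai(
  model = "gpt-4o-mini",
  system_prompt = "You are a concise assistant. Reply in one sentence."
)

# Send a prompt; the response is streamed and returned as text
chat$chat("Explain what a large language model is.")
```

The webinar walks through examples along these lines in more depth, including structuring prompts so that the model returns clean, tabular outputs.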
Why This Matters
Using LLMs through chat tools is useful for exploration, but it has limits.
For data scientists and developers, the value comes from:
- Automating repetitive tasks
- Embedding LLMs into applications and pipelines
- Generating structured outputs that can be reused downstream
This webinar focuses on that shift: from interactive use to integration in code.
Who Should Attend
This webinar is suitable for:
- Data scientists working with R or Python
- Developers interested in integrating AI into applications
- Teams exploring how to move from experimentation to production
No prior experience with LLM APIs is required, but familiarity with R or Python will be helpful.
Webinar Details
- Date: 23rd April 2026
- Time: 1:15 PM (UK time)
- Location: Online
- Cost: Free
Speaker
The session will be led by Myles Mitchell, Principal Data Scientist at Jumping Rivers.
Related Jumping Rivers Training
If you would like to explore these topics further, our 6-hour course, LLM-Driven Applications with R and Python, covers:
- Building LLM-powered dashboards
- Setting up retrieval-augmented generation (RAG) pipelines
- Responsible use of AI
Join Us
LLMs are quickly becoming part of the standard toolkit for data science.
Understanding how to use them programmatically opens up far more possibilities than using them through chat alone.
This session is designed to give you a clear starting point.
