IMSH 2026: Artificial Intelligence Products and Real Impact

This year's IMSH was, unsurprisingly, a brilliant experience. The feeling of flying across the ocean, full of anticipation, to be there when the gates of the largest medical simulation conference open simply never gets old. And just like every year, I was looking forward to the exhibition area the most, because that is precisely where the direction of our field is revealed.

As soon as I entered the hall, it was clear that this year was different. The exhibition was richer, larger, and more spectacular than ever before. The stands shone with colours and lights, and looking at the production values, I just sighed to myself: "This must have cost a fortune..." But in a good way. It is tangible proof of how simulation medicine keeps growing and flourishing.

As an IT specialist focusing on artificial intelligence, I was naturally most interested in the technological innovations, specifically those where AI plays the leading role. And I couldn't start with anyone other than my favourite, SIMStation.

SIMStation: Three Innovations That Make Sense

I had already heard before Christmas about what SIMStation was working on, but I didn't see it for myself until IMSH. This year, they prepared a strong combination for us: one deployment of AI for transcription, two examples of academic collaboration, and one notable third-party integration. Each of these deserves a closer look.

SIMStation at IMSH 2026

SIMStation AI: Transcripts, Topics, and Sentiment Analysis

This is exactly what you would expect from a video-debriefing system when someone mentions the deployment of artificial intelligence. SIMStation has added a new panel to its recording interface where you can see a real-time transcription of the dialogues from the simulation. This is particularly interesting for scenarios involving standardised patients.

What is truly fascinating is that SIMStation went a step further and added sentiment analysis to the transcription. This allows you to track how the atmosphere and emotional tone evolved throughout the simulation. I am very much looking forward to testing this at SIMU as part of our debriefing process. I am curious to see what it will bring to our debriefers, since we all know that debriefing is primarily a pedagogical skill; everything else is just a set of tools that may help facilitate better learning outcomes.
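SIMStation has not published implementation details, so take the following as a minimal sketch of the general idea only: run each timestamped transcript segment through an off-the-shelf sentiment classifier and read the signed scores as a "mood curve" over the scenario. The segment data and the use of the Hugging Face transformers library are my assumptions, not SIMStation's pipeline.

```python
# Minimal sketch: tracking emotional tone across a simulation transcript.
# Assumes timestamped transcript segments (illustrative data, not SIMStation's
# format) and an off-the-shelf classifier from Hugging Face transformers.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default English model

segments = [
    {"t": 12.4, "text": "Blood pressure is dropping, I need help here."},
    {"t": 31.0, "text": "Good call, the airway is secured."},
    {"t": 58.7, "text": "Why wasn't the second dose prepared?!"},
]

# Score each segment; a positive/negative label plus confidence gives a rough
# "mood curve" over the timeline that a debriefer can scan at a glance.
for seg in segments:
    result = classifier(seg["text"])[0]
    signed = result["score"] if result["label"] == "POSITIVE" else -result["score"]
    print(f'{seg["t"]:6.1f}s  {signed:+.2f}  {seg["text"]}')
```

A real product obviously needs streaming transcription and domain-tuned models, but the principle stays the same.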

I am genuinely eager to see how SIMStation tackles the inevitable hurdles that come with this technology. The biggest challenge will undoubtedly be speaker diarization: the ability to distinguish between multiple speakers. This remains a hard problem even for state-of-the-art systems. Achieving accurate transcription in hectic, life-threatening scenarios is a massive task, and I can't wait to see if they can truly crack it.
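For the technically curious, this is roughly what the diarization task looks like with an open-source toolkit such as pyannote.audio. I have no insight into SIMStation's actual approach; the model name, the token placeholder, and the audio file below are illustrative assumptions.

```python
# Minimal diarization sketch with the open-source pyannote.audio toolkit.
# This is NOT SIMStation's implementation; it just illustrates the task:
# given raw audio, label "who spoke when".
from pyannote.audio import Pipeline

# The pretrained pipeline is gated on Hugging Face; an access token is required.
pipeline = Pipeline.from_pretrained(
    "pyannote/speaker-diarization-3.1",
    use_auth_token="hf_...",  # placeholder, supply your own token
)

diarization = pipeline("simulation_audio.wav")

# Each track is a time interval attributed to an anonymous speaker label;
# in a hectic resus scenario, overlapping speech is exactly where this breaks.
for turn, _, speaker in diarization.itertracks(yield_label=True):
    print(f"{turn.start:6.1f}s - {turn.end:6.1f}s  {speaker}")
```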

Patient Monitor Recognition

I must say that this feature really excited me. SIMStation comes with an elegant solution to the age-old problem of integrating the myriad vital signs monitors from various manufacturers directly into its system. This is also an academic collaboration, one that bridges research and practical application.

When they solved this problem years ago at the recording level, they chose the path of hardware encoders. That was a clever choice for universality, because you can connect anything, but it had one catch: you only got a video image into the system. No data, no values, just the image.

This year's novelty changes that. They introduced a tool based on AI, specifically computer vision, which is trained on specific monitor layouts and can read individual values directly from the video feed.

Why is this so interesting from an IT perspective?

Independence: Compatibility with any hardware is preserved, but a "smart" layer is added on top of the video, decoding the image into data.

Efficiency: According to the developers, the entire process runs on a standard CPU. This is not some resource-hungry LLM, but a purpose-built machine learning model. And those can be very gentle on hardware resources.
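SIMStation trains a purpose-built model on specific monitor layouts, and I don't know its internals. The sketch below is therefore a deliberately crude stand-in for the same idea: crop fixed regions of interest from the video frame and OCR the digits. The coordinates, the file name, and the use of OpenCV with pytesseract are all my assumptions; they merely show why a known, static layout makes "video in, numbers out" tractable on a plain CPU.

```python
# Crude stand-in for "read vitals from a known monitor layout":
# crop fixed regions of interest from a video frame and OCR the digits.
# SIMStation uses a trained computer-vision model instead; this only
# illustrates the concept.
import cv2
import pytesseract

# ROI coordinates (x, y, w, h) are assumptions for an imaginary monitor layout.
ROIS = {
    "hr":   (40,  30, 120, 60),
    "spo2": (40, 110, 120, 60),
    "nibp": (40, 190, 200, 60),
}

def read_vitals(frame):
    """Extract numeric values from fixed monitor regions in a single frame."""
    values = {}
    for name, (x, y, w, h) in ROIS.items():
        roi = frame[y:y + h, x:x + w]
        gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
        # Binarise so the bright digits stand out from the dark background.
        _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        text = pytesseract.image_to_string(
            binary, config="--psm 7 -c tessedit_char_whitelist=0123456789/"
        )
        values[name] = text.strip()
    return values

cap = cv2.VideoCapture("encoder_feed.mp4")  # placeholder for the encoder feed
ok, frame = cap.read()
if ok:
    print(read_vitals(frame))
```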

This technology opens the door to many uses. Automatic annotations (logging when saturation dropped or heart rate rose) are just the beginning. I see it primarily as a robust foundation for AI-assisted debriefing. But until we get there, I will be satisfied "just" to finally see vital signs trends directly in the SIMStation debriefing application. Or is that still a distant dream? We shall see.
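Once the values exist as data, those automatic annotations become almost trivial. A minimal sketch, assuming a decoded stream of (timestamp, SpO2) samples and an alarm threshold of my own choosing:

```python
# Sketch: turning a decoded vitals stream into debriefing annotations.
# The threshold and the (timestamp, spo2) sample format are illustrative
# assumptions, not SIMStation's data model.
SPO2_ALARM = 90

def annotate_desaturations(samples):
    """Yield a timeline annotation each time SpO2 crosses below the alarm level."""
    below = False
    for t, spo2 in samples:
        if spo2 < SPO2_ALARM and not below:
            below = True
            yield (t, f"SpO2 dropped below {SPO2_ALARM}% ({spo2}%)")
        elif spo2 >= SPO2_ALARM:
            below = False

samples = [(10, 97), (20, 95), (30, 88), (40, 86), (50, 93), (60, 89)]
for t, note in annotate_desaturations(samples):
    print(f"{t:4d}s  {note}")
```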

Academic Project: SIMplifEYE

While the previous two innovations are soon-to-be-products, the third point of the presentation offered a glimpse into the research laboratory: the SIMplifEYE project.

It involves the use of advanced computer vision and machine learning to analyse events in the room. The AI is tasked with recognising actions, such as administering medication or performing CPR, and automatically recording them on a timeline.

What is the point of all this? This project isn't just about computer vision, and it isn't just about simulation medicine. It is a perfect example of how simulation medicine serves as a gateway for clinical solutions.

Let's take the aforementioned detection of CPR quality from video. In a simulation room it sounds interesting, but honestly, there we most likely have a smart manikin that already sends us precise data on compression depth and rate. In simulation, we may not need such a solution at all.

But what about a real clinical environment? There are no sensors inside the patient. If this technology can be developed and tested in the safe environment of simulation, its subsequent deployment in real wards could be revolutionary. And that is precisely where I see the greatest strength of similar academic projects.

Third-Party System Integration: Virtual Patient in SIMStation

The second product feature SIMStation introduced, alongside the transcription AI, is the integration of the PCS.AI system. And here I would like to pause for a moment on the word integration itself.

It is fascinating to watch how a previously closed ecosystem is beginning to open up. SIMStation, which traditionally prides itself on its own "in-house" solutions, has integrated third-party technology into its product. Is this the start of a new trend? Will we soon see the ability to control other external tools from SIMStation?

For those who don't know PCS.AI (see below), in short, it is an advanced engine that forms the brain of an AI virtual patient. SIMStation utilised their technology, and thanks to this, you can now control the voice and reactions of this virtual patient directly in the application's control window. Simulation participants speak to the manikin, and the PCS.AI system answers them.

Where is the added value of SIMStation? The answer lies in the infrastructure. SIMStation supplies the hardware: professional microphones, speakers, and sound matrices. The virtual patient from PCS running inside SIMStation can thus fully utilise this professional audio equipment, and it is controlled from a single software interface. And who knows, perhaps we will soon see this entire conversation automatically transcribed into the debriefing as well. But that is another topic; see the video at the beginning of the chapter.

PCS.AI and Simconverse: A Pair of Successful Virtual Patients

Today, that magical acronym AI is omnipresent. Sometimes it gives one a headache. I admit that, much like the hype around VR/XR, I maintain a healthy scepticism towards AI. In short, there are applications where large language models are brilliant, and applications where they simply do not belong. After all, I often speak on this topic.

However, where LLMs achieve absolutely excellent results is in conversations. We have all heard that current models have passed the Turing test in written text. And honestly, I didn't think I would live to see the day. Today, however, these models are extremely powerful even in spoken language accompanied by video. And this is exactly what products like PCS.AI or Simconverse utilise, bringing the concept of AI standardised patients.

We all know that standardised patients (SP) are a complex topic in medical education. They are demanding organisationally (you have to find actors), knowledge-wise (you have to train them on the scenario), and of course financially. Can AI be the solution? To a certain extent, yes. I tried these products at IMSH, and I must admit they were very interesting.

Looking at the specific solutions, we see two distinct but complementary approaches. PCS.AI bets on maximum fidelity and integration. Their engine doesn't just power avatars on a screen; it also acts as the "brain" for physical manikins (e.g., the Alex model) or can be connected to existing ones using the smart SimVox speaker. It is about depth of experience and emotion directly in the simulation centre. On the other side stands Simconverse, which epitomises flexibility. It is a purely cloud-based (SaaS) platform that requires no special hardware and targets communication practice at scale. A student simply opens a browser, puts on headphones, and can train with dozens of different patients until they gain confidence.

Real Impact: I see a huge opportunity here, particularly for practising conversational scenarios comfortably from home. Students can practise taking a medical history or breaking bad news repeatedly, without the stress of being evaluated by an instructor and at a time that suits them. That is scalability you simply cannot achieve with live actors.

PCS.AI at IMSH 2026

Simconverse at IMSH 2026

Academic Alternative: Comenius University

We don't have to look only to the big commercial players. The excellent work of our colleagues from Comenius University in Bratislava is worth mentioning. They used artificial intelligence to create videos with standardised patients.

In their case, these are currently pre-recorded videos in which patients describe, for example, the symptoms of specific diseases. Even this form, however, has proven very effective in teaching. And from what I heard from colleagues, they are currently experimenting with conversational AI models to enable real, dynamic dialogues. It is great to see innovations comparable to the world stage happening at simulation centres closer to home as well.

Oxford Medical Simulation: When VR Meets AI

I admit that I have wanted to write briefly about OMS (Oxford Medical Simulation) since last year, when I first saw their product.

When it comes to the use of virtual reality (VR) in education, I generally belong to the more sceptical voices. I always ask critically: Does it really bring the appropriate fidelity? Can it not be trivially and cheaply replaced by a physical environment? And does the given VR solution bring something truly extra?

My biggest problem with VR simulations has always been communication. As long as we "conversed" with patients by selecting pre-prepared sentences from a floating menu and clicking on them with a controller, it wasn't a simulation. It was a video game. Clicking on text is simply not immersive.

This year at OMS, I really liked the combination of VR and AI (an LLM) for processing speech and generating the patient's reactions directly in the virtual environment. Yes, there is still that slight delay we all know (before the cloud "crunches" your sentence and generates an answer), but otherwise, it was exactly the missing piece. You put on the headset, stand by the bed, and speak to the patient. You ask them questions, you reassure them, and they answer. That is the moment when VR finally starts to make sense for soft-skills training too.
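I have no insight into which stack OMS actually runs, but the basic loop behind any "speak to the patient" feature is the same everywhere: speech-to-text, a language model constrained by a patient persona, text-to-speech. The sketch below uses OpenAI's Python client purely as a familiar example; every model name and the persona wording are my assumptions, and the cloud round trips are exactly where that famous delay comes from.

```python
# Generic "talk to a virtual patient" loop: STT -> LLM -> TTS.
# Not OMS's implementation; the OpenAI client is used here only as a
# familiar example, and all model names are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONA = (
    "You are Mrs. Novak, a 68-year-old patient with chest pain. "
    "Answer briefly, stay in character, and show worry about your condition."
)

def patient_turn(audio_path: str) -> bytes:
    """One conversational turn: transcribe the learner, generate a reply, voice it."""
    # 1) Speech-to-text: this round trip is one source of the familiar lag.
    with open(audio_path, "rb") as f:
        heard = client.audio.transcriptions.create(model="whisper-1", file=f).text

    # 2) The language model answers in character.
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": heard},
        ],
    ).choices[0].message.content

    # 3) Text-to-speech, ready to play back inside the headset.
    speech = client.audio.speech.create(model="tts-1", voice="alloy", input=reply)
    return speech.read()
```

Cutting the latency of that loop, through streaming transcription or local models, is where most of the engineering effort in these products seems to go.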

OMS are of course not the only ones; everyone who is serious about VR is implementing this trend, but at IMSH they served as a great, representative example of where technology has moved.

Oxford Medical Simulation at IMSH 2026

Academic Footprint: Project at Masaryk University (SIMU)

But lest we only praise foreign companies, I must mention that we at SIMU are also working intensively on this topic. We are partners in a Czech TAČR project focused on exactly this area: connecting a VR application with artificial intelligence to train difficult communication.

Why communication specifically? The data speaks clearly. Over 60% of complaints in healthcare facilities concern inappropriate communication by staff. Burnout syndrome in doctors is often related to stress in emotionally charged situations, such as breaking bad news or announcing a death. And this is where we see huge potential.

In our project, on which we collaborate with other Czech experts, we are creating a virtual patient controlled by AI. A healthcare professional puts on a headset and finds themselves in a situation where they must announce, for example, a death after a car accident. The patient (AI) reacts to what the professional says, expresses emotions, and asks unpleasant questions.
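How do you make an AI character express emotions and ask unpleasant questions? In practice, much of it comes down to the persona prompt that constrains the language model. The sketch below is purely my own illustration of the technique, including the choice of character; it is not the actual prompt from our project.

```python
# Purely illustrative persona prompt for a death-notification scenario;
# NOT the actual prompt used in the TAČR project. The character choice
# (a bereaved relative) and the wording are my own assumptions.
BEREAVED_RELATIVE = """\
You are Mrs. Dvořáková. Your husband was brought to the emergency department
after a serious car accident. You do not yet know that he has died.
React only to what the clinician actually says. Move realistically through
shock, denial, and anger. Ask unpleasant questions the clinician must handle
("Why couldn't you save him?", "Can I at least see him?").
Never break character and never comfort the clinician.
"""
```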

The goal is not to replace teaching with actors, but to enable safe, repeatable training. A student can try "dry runs" of what it is like to face a torrent of emotions, and realise that intuition is often not enough and that professional communication protocols need to be followed. It is great to see that solutions comparable to those we see at IMSH are also being created in our laboratories.