Implementation of Artificial Intelligence at Nuremberg Hospital: An Overview

Nuremberg Hospital is one of the largest municipal hospitals in Europe and a maximum care hospital. With its 8,400 employees, it treats around 335,000 inpatients and outpatients every year across its four sites in Nuremberg, Lauf, and Altdorf. The Group also includes Akademie Klinikum Nürnberg and Paracelsus Medical Private University Campus Nuremberg (PMU).

Dr. Manfred Criegee-Rieck holds a doctorate in mathematics and, as a medical IT specialist at Nuremberg Hospital, is responsible for defining the requirements for implementing AI. We spoke to him about the challenges of using AI in a hospital:


In which areas are you already using AI at Nuremberg Hospital?

The most frequently mentioned areas of application for AI in hospitals include medical imaging and radiology, to name just two typical areas. One of our first steps, in 2020, was the use of artificial intelligence in the diagnosis of skin diseases. The two skin diseases ulcus cruris and the less common pyoderma gangraenosum look very similar but are treated differently. The aim here is to achieve a clear classification that minimizes the risk of confusion. The Manfred Roth Foundation provided welcome financial support at the time.
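
Technically, this is a binary image-classification problem. The sketch below shows one common way such a classifier could be built with transfer learning in PyTorch; the backbone (ResNet-18), folder layout, and hyperparameters are illustrative assumptions, not the hospital's actual implementation.

```python
# Hypothetical sketch: distinguishing two visually similar skin diseases
# as a two-class image classifier via transfer learning.
# Model choice, paths, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard ImageNet preprocessing for a pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Assumed folder layout: data/train/ulcus_cruris/, data/train/pyoderma_gangraenosum/
train_set = datasets.ImageFolder("data/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=16, shuffle=True)

# Pretrained ResNet-18 with a new two-class output head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

model.train()
for epoch in range(5):  # short illustrative training loop
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

In practice, such a model would of course have to be validated against clinical ground truth before it supports any treatment decision.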

In addition to medicine, there is also the area of information security and data protection, which we aim to improve using AI-supported processes. In the AI4HealthSec project funded by the European Commission, our goal, as one of 15 project partners, is to improve the timely detection of cyber attacks on infrastructure and to sharpen risk awareness. In addition to vulnerability and risk analysis of the technical hospital infrastructure, AI-supported prevention of and defense against modern deception methods, e.g., social engineering, are also being developed further.
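
One concrete building block in this space, shown here only as a simplified illustration and not as the AI4HealthSec architecture, is anomaly detection on infrastructure telemetry. The sketch below applies scikit-learn's IsolationForest to hypothetical, pre-aggregated log features such as request rate and failed logins.

```python
# Hypothetical sketch: flag anomalous hosts from aggregated log features.
# Feature names, values, and thresholds are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [requests_per_min, failed_logins, outbound_mb] for one host and interval.
baseline = np.array([
    [120, 1, 3.2],
    [110, 0, 2.9],
    [130, 2, 3.5],
    [125, 1, 3.1],
    [118, 0, 3.0],
])

# Train on data from normal operation only.
detector = IsolationForest(contamination=0.05, random_state=0)
detector.fit(baseline)

# New observations: the second row simulates a burst of failed logins and data egress.
current = np.array([
    [122, 1, 3.3],
    [480, 35, 41.0],
])
for row, label in zip(current, detector.predict(current)):  # +1 normal, -1 anomaly
    if label == -1:
        print("Potential incident, escalate for review:", row)
```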


Are there other typical use cases for the application of AI that you see in the near future?

Medical documentation as an overall process currently ties up a great deal of doctors' and nurses' time in hospitals, which does not benefit the patient. In my view, the dictation and digital speech recognition currently in use are only interim solutions that provide selective, limited relief but do not sufficiently automate the overall process. What is needed here is an AI that captures spoken language and processes it into grammatically correct, fluently readable texts of the kind required for doctor's letters, surgical reports, or findings, and that also provides case-related coding of all billable services for a treatment case. Such a case-related AI could also address the problem that patient records do not contain all the necessary and revenue-relevant information, avoiding negative liquidity effects by ensuring that every treatment case can be coded and billed in a timely manner. This would contribute to a significant increase in efficiency.
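
A minimal sketch of what such a pipeline could look like, under clearly labeled assumptions: an on-premise speech-to-text step (here the open-source whisper package) followed by a placeholder structuring and coding step. The function draft_report_and_codes, its prompt, and the file names are hypothetical; they do not describe an existing product or the hospital's setup.

```python
# Hypothetical sketch of a documentation pipeline:
# 1) transcribe dictation on premise, 2) draft structured text and billing codes.
# The structuring step is a placeholder; prompts, model, and code systems
# (e.g., ICD/OPS) would have to be defined and validated clinically.
import whisper  # open-source speech-to-text (pip install openai-whisper)


def transcribe_dictation(audio_path: str) -> str:
    """Run local speech recognition on a dictated note."""
    model = whisper.load_model("base")
    result = model.transcribe(audio_path, language="de")
    return result["text"]


def draft_report_and_codes(transcript: str) -> dict:
    """Placeholder for an LLM step that rewrites the transcript into a
    readable report and proposes case-related billing codes.
    Output must always be reviewed and approved by clinical staff."""
    prompt = (
        "Rewrite the following dictation as a grammatically correct, "
        "readable medical report and list candidate billing codes:\n"
        + transcript
    )
    # A call to a locally hosted LLM endpoint would go here.
    return {"report_draft": prompt, "code_candidates": []}


if __name__ == "__main__":
    text = transcribe_dictation("ward_round_dictation.wav")
    draft = draft_report_and_codes(text)
    print(draft["report_draft"][:200])
```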


Is the use of AI still “experimental” or does it already bring measurable added value in terms of increasing efficiency at Nuremberg Hospital?

The question is too general. Specific AI solutions produce a quantifiable benefit for certain, well-defined problems because they can outperform humans, e.g., in the analysis and classification of medical images. We can certainly speak of quantifiable added value here today. As far as efficiency gains through AI at the hospital as a whole are concerned, there is no reliable information available. For medical use cases with AI, we mostly read about research or evaluation projects or about the potential of AI technologies, which demonstrates their experimental nature. I am not aware of an off-the-shelf AI solution for medicine, so such tools are still a long way from everyday use in healthcare.


Can a hospital use cloud-based AI?

Yes, that is indeed possible. As with many other scenarios, the legal framework conditions must be complied with, and in the case of cloud services data protection stands out. A cloud-based AI that processes medical data on behalf of a hospital in compliance with data protection regulations will find customers. AI providers who have understood privacy by design as a requirement for their cloud-based service from the outset will find this easier. In my opinion, the strategy of presenting the protection of personal rights as an obstacle to innovative medicine using AI and calling for an amendment to data protection law is not expedient.
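
To make the privacy-by-design idea concrete, here is a simplified, hypothetical illustration of one typical building block: pseudonymizing direct identifiers before a record leaves the hospital for cloud-based processing. The key handling and field list are assumptions for the example, not a complete data protection concept.

```python
# Hypothetical sketch: pseudonymize direct identifiers before cloud processing.
# In practice, key management, the field list, and re-identification rules
# would be defined by the data protection concept, not hard-coded like this.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-key-from-a-secure-key-store"
DIRECT_IDENTIFIERS = {"name", "insurance_number", "date_of_birth"}


def pseudonym(value: str) -> str:
    """Deterministic keyed hash so the hospital can re-link results later."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]


def pseudonymize(record: dict) -> dict:
    """Replace direct identifiers; keep clinical fields needed by the AI service."""
    return {
        key: (pseudonym(str(value)) if key in DIRECT_IDENTIFIERS else value)
        for key, value in record.items()
    }


record = {"name": "Jane Doe", "insurance_number": "A123456789",
          "date_of_birth": "1960-01-01", "finding": "ulcus cruris, left leg"}
print(pseudonymize(record))  # only the pseudonymized version would be sent out
```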


What hurdles and obstacles are there to the use of AI in hospitals?

Firstly, the regulatory requirements for the use of AI in hospitals, and in the healthcare sector more broadly, must be established. This cannot be done in the hospital itself. The processing of health data in particular is a highly sensitive area in which legal and ethical questions also arise. In addition, the principles of modern, evidence-based diagnostics and therapy must be taken into account in AI-supported procedures. Simply put, poor-quality training data does not lead to high-quality results when using AI. Ensuring patient safety and the effectiveness and quality of treatment is paramount. As long as these questions remain unanswered, there is a lack of certainty for action and investment.

As far as the use of AI in the hospital workplace is concerned, I am optimistic, as many employees are even demanding it, based on their positive experiences with AI in their private lives.


Have you already developed customized AI solutions for Nuremberg Hospital?

As mentioned earlier, we are currently developing an AI that has learned to reliably distinguish between the two skin diseases ulcus cruris and the rarer pyoderma gangraenosum, also known as ulcerative dermatitis, as both look similar but need to be treated differently. Wound cleansing, as is common with leg ulcers, can in the worst case lead to amputation in patients with ulcerative dermatitis. The need for correct and reliable classification from the outset is therefore entirely in line with the medical principle of “primum non nocere, secundum cavere, tertium sanare” (first, do no harm; second, be careful; third, heal).


Are there partnerships with AI start-ups such as OpenAI, Cohere, or Anthropic?

No, not at the moment. However, we are monitoring the market and are happy to welcome any potential partner who has already familiarized themselves with the regulatory and ethical framework conditions of our industry in advance, so that we can discuss a specific use case promptly without first having to go to the trouble of finding one. We also have a significant number of employees across all professional groups who, in their private lives, engage very intensively with the innovative improvements made possible by LLMs such as ChatGPT or by RPA (robotic process automation), and who actively introduce, promote, and demand these in their professional context.


Is Nuremberg Hospital developing its own LLM (Large Language Model) or are you interested in doing so?

As mentioned earlier, we would be very interested in an LLM that makes the overall process of medical documentation within our system environment prompt and efficient. I only see in-house development as part of a funded project in which my hospital’s own technical contribution is adequately taken into account.


Are there any reservations about the use of AI in your organization?

There are certainly reservations, and the key question is how to deal with them. Some employees have a healthy skepticism and insist on seeing the benefits. Others, who have studied AI in more detail, also see the risks of ill-considered use or the ethical challenges of this new category of tools. Still others are quite specifically concerned about the competitive situation that results from transferring activities to an AI. It is important to act openly and transparently and not to dismiss these reservations as unjustified.

With new technologies and methods, a balance must always be struck between sometimes overly euphoric expectations and realistic results. This doesn’t just apply to my organization, and we are on the way there.


How can the use of AI be reconciled with the strict German Patient Data Protection Act?

The primary aim of the PDSG is for insured persons to have an electronic patient record (ePA) and thus gain control over their medical data and information. This means that in addition to medical reports, findings, or X-rays, medication taken, electronic prescriptions, vaccination records, and other medical documents are stored in one place, and the patient alone decides what happens to their data. In each individual case, the patient controls who can access their ePA and for what purpose, via an app on their smartphone or tablet. The ePA is accessed within the telematics infrastructure by doctors, hospitals, or pharmacists, and if an AI process is to be used there, the patient must be informed and their consent obtained. If the patient gives this consent and all data protection criteria, such as purpose limitation in the AI procedure, are met, the use of AI should be possible without any problems.
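
As a toy illustration of the consent and purpose-limitation logic described above, the check below gates an AI step on a documented, purpose-specific consent. Field names and purposes are invented for the example; real ePA access control is enforced within the telematics infrastructure, not in application code like this.

```python
# Hypothetical sketch: allow an AI processing step only if the patient has
# consented to exactly this purpose. Purposes and structures are illustrative.
from dataclasses import dataclass, field


@dataclass
class Consent:
    patient_id: str
    allowed_purposes: set[str] = field(default_factory=set)


def may_process(consent: Consent, purpose: str) -> bool:
    """Purpose limitation: process only with explicit consent for this purpose."""
    return purpose in consent.allowed_purposes


consent = Consent(patient_id="pat-0815",
                  allowed_purposes={"ai_assisted_image_classification"})

if may_process(consent, "ai_assisted_image_classification"):
    print("Run AI analysis for this purpose.")
else:
    print("No consent for this purpose: do not process.")
```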


Are there any fears that patient data could be leaked through the use of AI?

Every user of an ePA within the underlying telematics infrastructure, be it a doctor, hospital, or pharmacist, is responsible for protecting the patient data they process. Each of these users must ensure that no patient data is leaked or can leave the telematics infrastructure. To this end, the provider of a software solution that processes patient data using AI must be contractually obliged by the user to create the technical and, if necessary, organizational conditions to ensure that such leakage is not possible. Such fears are not unfounded, especially if, for example, technical and organizational security measures are not taken for reasons of time or cost.


What wishes do you have with regard to AI? What should it do better for patients and hospitals?

AI-based tools must make all data-processing processes in the hospital more efficient while always remaining safe to use, trustworthy, and dependable for patients and medical staff. My immediate goals for AI would be to relieve the burden on care and administrative staff as quickly as possible. More time at the bedside is of direct benefit to patients, as are shorter planning times in administration, e.g., for the optimal staffing of a ward. The first focus for AI must be on the time wasters: documentation, administrative bureaucracy, and planning. The second step would be medical applications such as more precise diagnoses, faster evaluation of complex constellations of findings (multimorbidity, polypharmacy), and more efficient individualization of therapy with better effectiveness while at the same time protecting the patient.
