Artificial Intelligence (AI) plays an increasing role in health and medicine. We’ve written about AI applications for faster MRI scan readings, stroke triage, virtual health consults, and many others. Applied to big data, AI assists with diagnosis, makes predictions, shapes care plans, and monitors post-surgical patient activity. The potential for AI to improve global healthcare appears boundless at this juncture, but unleashing such powerful technology on the world’s population raises questions of privacy, data security, prioritization in many forms, and even science fiction’s cash cow: robots that turn on their makers.

The European Commission recently announced a pilot program beginning this summer to assess whether its AI ethics guidelines are workable. The initiative has three elements: setting out requirements for trustworthy AI, running a large-scale pilot to gather stakeholder feedback, and building international consensus on “human-centric AI.” The Commission has identified seven essential elements for trustworthy AI (in addition to compliance with applicable laws and regulations).

The Seven Essentials are:

1. Human agency and oversight
2. Robustness and safety
3. Privacy and data governance
4. Transparency
5. Diversity, non-discrimination, and fairness
6. Societal and environmental well-being
7. Accountability

By its very nature, artificial intelligence does not always reveal how it reaches its decisions. That opacity matters precisely because AI has potential applications in most major sectors, including healthcare, energy consumption, vehicle safety, farming, climate change, and financial risk management. It is also an important component of cybersecurity, fraud detection, and law enforcement. The EU’s forward-thinking standards approach, grounded in global consensus, is a responsible attempt to deal with potential problems before they become major ones.