British Government Investment in AI and Military Drones
Faculty AI, a UK-based AI startup known for its work with the National Health Service (NHS), is developing AI technologies for military drones.
The British AI consultancy is known for its close ties to the UK government and has played a significant role in the country's AI sector. Its services span various industries, and it has been recognized as one of the most prominent AI service companies in the United Kingdom.
Although Faculty AI has established itself as an important part of the British AI sector, its defense-related work has raised policy and ethical concerns about the possible use of autonomous drones in military operations.
Faculty AI rose to prominence in the UK after its data analysis work for the Vote Leave campaign.
Governments around the world are racing to assess the safety risks associated with AI, spurred by the rapid advancement of generative AI.
The Role of Faculty AI in Defense
A defense industry partner said that Faculty has “experience developing and deploying AI models on to Unmanned Aerial Vehicles (UAVs).”
This work includes subject identification, tracking object movement, and exploring autonomous swarming capabilities.
The company, Faculty Science, has also partnered with the London-based startup Hadean, a collaboration that highlights its role in advancing drone technologies.
Weapons companies are exploring ways to integrate AI into drones, including “loyal wingmen” that could accompany fighter jets and loitering munitions that can already wait for targets to emerge before firing.
Faculty confirmed that its work with Hadean does not involve weapons targeting, but it declined to say whether it is working on drones capable of applying lethal force. It also kept details of its defense work confidential.
The company’s spokesperson defended its work, saying: “We help to develop novel AI models that will help our defense partners create safer, more robust solutions. We have rigorous ethical policies and internal processes and follow ethical guidelines on AI from the Ministry of Defence.”
“We’re trusted by governments and model developers to ensure frontier AI is safe, and by defense clients to apply AI ethically to help keep citizens safe,” the spokesperson added, emphasizing Faculty’s decade of expertise in AI safety, including work countering terrorism and child abuse.
A Faculty representative reiterated the company’s dedication to ethical standards: “We’ve worked on AI safety for a decade and are world-leading experts in this field. That’s why we’re trusted by governments and model developers.”