Artificial Intelligence can help law enforcement and public safety in several ways. Even though public safety personnel receive more data than ever before, there is still a gap in real-time, operational intelligence. Personnel are expected to handle complex, unfolding emergencies at a rapid pace, and the nonstop influx of data is nearly impossible to sift through and categorize manually. This creates information gaps, and overburdened staff may miss crucial connections. Today, most agencies have a good handle on reporting and Business Intelligence, but that analysis is only available after an incident occurs. AI can help uncover the insights buried in real-time data.
I think the area with the most promise is embedding AI capabilities into core operational systems like computer-aided dispatch. This 'assistive' AI augments personnel, giving them a second set of eyes that continuously scans the operational data behind the scenes, looking for patterns, similarities, trends and anomalies. When the AI detects something, it proactively alerts the user. This lets personnel focus solely on the incident, eliminating extra steps in the process, such as a call-taker or dispatcher manually requesting data from a system or person.
This embedded, assistive AI capability is extremely helpful for day-to-day incidents and during rapid-onset emergencies, such as natural disasters, where seconds and minutes matter. By scanning the operational data, AI can often detect things before humans make the connection. A good example of this is Hexagon's 'Smart Advisor', which can prevent "operational blind spots" by proactively alerting personnel to the potential onset of incidents or emergencies. By detecting patterns and connections sooner, agencies can act faster and coordinate smarter to reduce the impact on communities, resources and staff.
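To make the idea concrete, here is a minimal sketch of one pattern an assistive system might watch for: a sudden spike in call volume relative to the recent baseline. This is a toy illustration of the general concept, not how Smart Advisor or any specific product works; the data and thresholds are invented for the example.

```python
from statistics import mean, stdev

def detect_call_spike(hourly_counts, window=24, threshold=3.0):
    """Flag hours whose call volume deviates sharply from the recent baseline.

    hourly_counts: incident counts per hour, oldest first.
    Returns indices of hours whose count exceeds the rolling mean of the
    previous `window` hours by more than `threshold` standard deviations.
    """
    alerts = []
    for i in range(window, len(hourly_counts)):
        baseline = hourly_counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (hourly_counts[i] - mu) / sigma > threshold:
            alerts.append(i)
    return alerts

# Example: a steady ~10 calls per hour, then a sudden surge in the final hour.
counts = [10, 11, 9, 10, 12, 10, 9, 11, 10, 10, 11, 9,
          10, 12, 11, 10, 9, 10, 11, 10, 12, 9, 10, 11, 40]
print(detect_call_spike(counts))  # → [24]: only the surge hour is flagged
```

In a real dispatch environment, the alert would surface to the call-taker with supporting context rather than triggering any automated action, consistent with the human-in-the-loop principle discussed below.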
When using AI for public safety, it is important to be able to explain how an algorithm works and to ensure its output is explainable and interpretable by end users (typically frontline personnel). The datasets used to train algorithms should also be examined and reviewed by a third party before implementation to determine whether the data is biased or unbalanced. Additionally, once the AI solution has been deployed, humans should always remain the decision-makers, with AI never executing commands independently. AI can provide notifications and alerts based on data mining, but action should only be taken by people, as they can vet the information and determine the best steps to take, if any. The public's faith in first responders' decision-making must be absolute.
Stakeholder involvement is crucial. It's important to understand and assess whether AI solutions and use cases are acceptable, transparent, explainable, and not based on biased data. Some agencies already have Data Privacy Officers, whose job includes engaging the community about initiatives like AI.
It's likely that you will see AI embedded into the systems already in use, and therefore tied to established operations and practices. I would suggest taking inventory of where and how AI can be used. The farther a use case strays from established practices that are known and understood, the more likely it is that stakeholders will need to be educated and informed.
Artificial intelligence has an important role to play in the Defense and Intelligence communities. A classic example is imagery analysis. Imagery analysts have an overabundance of data to analyze; their challenge is not getting better data, but getting better intelligence from data quickly. Applying machine learning to geospatial datasets creates efficiencies: it can examine characteristics like patterns, shapes and sizes, and automate feature extraction and change detection.
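The core idea behind automated change detection can be sketched very simply: compare two observations of the same scene over time and surface what changed. The toy example below compares two small image bands by per-pixel differencing; real geospatial ML systems use learned features and far richer data, but the underlying comparison is the same. All values and thresholds here are invented for illustration.

```python
def change_mask(before, after, threshold=0.2):
    """Mark cells where two equal-sized image bands (lists of rows of
    pixel values in [0, 1]) differ by more than `threshold`.
    A toy stand-in for automated change detection between scenes."""
    return [[abs(b - a) > threshold for b, a in zip(brow, arow)]
            for brow, arow in zip(before, after)]

before = [[0.1, 0.1, 0.9],
          [0.1, 0.1, 0.9]]
after  = [[0.1, 0.8, 0.9],   # one cell brightened between passes
          [0.1, 0.1, 0.9]]
print(change_mask(before, after))
# → [[False, True, False], [False, False, False]]
```

The value for the analyst is triage: instead of scanning entire scenes, attention goes straight to the cells the mask flags.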
There is a lot more that AI can do. Consider maritime operations. An important component of maritime awareness is the automatic identification system (AIS), a tracking method that uses transponders on ships. AIS, along with satellite, weather, and other data can be used to detect anomalies in ships’ travel patterns that could indicate illicit activities. The trick is to find the “needles in the haystack” among the millions of available data points.
A trained network can aid in determining which anomalies are worth investigating. It can also present and assess multiple options for dealing with suspicious activities, including identifying and recommending the units and resources best equipped to respond. These AI-based decision-support measures strengthen an organization's ability to exercise its own best judgment and improve the likelihood of favorable outcomes.