To combat cybercrime, the US government is creating an AI sandbox

To better understand cyberthreats and share that knowledge with the public and private sectors, top US security agencies are building a virtual environment that employs machine learning.

The US Department of Homeland Security (DHS) Science and Technology Directorate (S&T) and the Cybersecurity and Infrastructure Security Agency (CISA) are working together to create an artificial intelligence (AI) sandbox where researchers can pool their resources and try out new analytical methods for stopping cyberattacks.

CISA’s Advanced Analytics Platform for Machine Learning (CAP-M) will be deployed both on-premises and across multiple cloud environments.

Learning about threats

The DHS said that although the environment’s primary focus will be cyber missions, it will be “open and extendable” enough to accommodate data sets, tools, and collaboration for other infrastructure security missions.

Experiments of varying complexity will run in CAP-M, and the resulting data will be evaluated and correlated to help organisations of all sizes better defend themselves against the ever-growing range of cyberthreats.

Other government agencies, along with academic and commercial organisations, will have access to the experimental data collected, and S&T has promised to take privacy concerns seriously.

The trials will test both how well AI and machine learning techniques can analyse cyberthreats and how effective they are as tools for fighting them. To automate processes such as data export and model tuning, CAP-M will also establish a machine learning loop.
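CISA has not published CAP-M’s internals, so the shape of that loop is open to interpretation. As a minimal sketch, assuming a train-evaluate-promote cycle built on scikit-learn: the export_batch helper, the synthetic data, and the promote-on-better-F1 rule below are all illustrative assumptions, not anything the agencies have described.

```python
# Hypothetical sketch of an automated ML loop (export -> retrain -> evaluate).
# All names, data, and thresholds are invented for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split


def export_batch(seed: int):
    """Stand-in for the data-export step: a fresh batch of labelled telemetry."""
    return make_classification(
        n_samples=2_000, n_features=20,
        weights=[0.95],          # mostly benign traffic, a few attacks
        random_state=seed,
    )


best_score, best_model = 0.0, None
for round_id in range(5):        # each round: export new data, retrain, re-evaluate
    X, y = export_batch(seed=round_id)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, stratify=y, random_state=0
    )
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)
    score = f1_score(y_test, model.predict(X_test))
    if score > best_score:       # promote only models that beat the incumbent
        best_score, best_model = score, model
    print(f"round {round_id}: F1={score:.3f} (best so far {best_score:.3f})")
```

The gate at the end is the point of a loop like this: a freshly retrained model only replaces the incumbent when it measurably improves on held-out data, which guards against regressions caused by one bad batch.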

Monti Knode, director of pentesting at Horizon3.ai, welcomed the chance to evaluate these analytical capabilities, arguing that such a strategy should have been adopted long ago.

As Knode put it, “previous failures have contributed massively to alert fatigue over the years, sending analysts and practitioners down wild goose chases and rabbit holes, while true alerts that matter stay buried.”

He said, “[CAP-M] might be a great move,” since “laboratories seldom duplicate the complexity and cacophony of a real production environment.”

Knode suggested that the AI could be trained automatically on simulated attacks to learn how they operate and how to recognise them.
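Nothing in the announcement says how such training would work. As a hedged illustration of the idea, the sketch below generates synthetic “benign” and “attack” traffic and fits scikit-learn’s IsolationForest on benign behaviour only, so simulated attacks stand out as anomalies; the two features (packet rate, payload size) and every number are invented for the example.

```python
# Illustrative only: train an anomaly detector on benign traffic, then
# check that simulated attack traffic is flagged. Not CAP-M's design.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "benign" traffic: modest packet rates and payload sizes.
benign = rng.normal(loc=[100.0, 500.0], scale=[10.0, 50.0], size=(1_000, 2))

# Simulated flood attack: far higher packet rates, similar payloads.
attack = rng.normal(loc=[900.0, 450.0], scale=[50.0, 50.0], size=(50, 2))

# Fit on benign behaviour only, then score both sets;
# predict() returns -1 for points the model considers anomalous.
detector = IsolationForest(contamination=0.01, random_state=0).fit(benign)
print("benign flagged as anomalous:", (detector.predict(benign) == -1).mean())
print("attack flagged as anomalous:", (detector.predict(attack) == -1).mean())
```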

Sami Elhini, biometrics specialist at Cerberus Sentinel, shared the hope that the learning and analysis would yield a deeper understanding of threats, but cautioned that overly generalised models might miss attacks on smaller targets, filtering them out as unimportant.

He also raised security concerns, saying, “When… exposing [AI/ML] models to a bigger audience, the likelihood of an exploit grows.” He warned that CAP-M could become a target for other countries looking to gain insight into, or tamper with, the system.

Overall, though, reaction to the government initiative appears favourable. Keeper Security co-founder and chief technology officer Craig Lurey told The Register that “Supporting and catalysing disjointed private sector R&D activities is a goal of many federally funded R&D initiatives. As a matter of paramount importance to the nation’s safety, cyber security must be given top billing.”

This was echoed by Tom Kellermann, vice president of Contrast Security, who called CAP-M an “essential effort to better information exchange on TTPs [tactics, techniques, and procedures] and boost situational awareness throughout American cyberspace.”