Bestgamingpro


Orca Security uses ChatGPT to bring AI to cloud security

Keeping data safe in the cloud is a complex task. But with the help of AI and automation through solutions like ChatGPT, security teams can streamline day-to-day operations and respond to cyber incidents more efficiently.

Orca Security, a cloud cybersecurity firm based in Israel that was valued at $1.8 billion in 2021, is one supplier whose model is representative of this trend. Orca, a leader in cloud-based security, just announced it will roll out the first version of its proprietary ChatGPT integration to the public. The integration will process security alerts and give users detailed instructions on how to fix the underlying problem.

At a high level, this integration shows how ChatGPT may help businesses streamline their security operations workflows, allowing for quicker processing of alerts and events.

Alert management has been a constant source of stress for security professionals. In fact, seventy percent of security professionals say that handling IT threat alerts takes an emotional toll on their personal lives.

Moreover, over half of those polled said they lack confidence in their ability to prioritise alerts and respond to them quickly.

This lack of confidence stems, in part, from the time and effort required to determine whether each alert represents a genuine security risk and, if so, to take appropriate action as quickly as possible.

This is especially difficult in environments with many disconnected tools, such as cloud and hybrid deployments.

This is a laborious process with little room for error. Orca Security therefore plans to use ChatGPT (which is built on GPT-3) to help automate the alert management process for its users.

“We used GPT-3 to improve our system’s capacity to provide remedial guidance in the context of Orca security alerts. This integration substantially simplifies and speeds up mean time to resolution (MTTR), boosting our clients’ capacity to offer quick remediations and continually keep their cloud environments safe,” said Itamar Golan, head of data science at Orca Security.

In essence, Orca Security uses a custom pipeline to forward security alerts to ChatGPT, which processes the data, noting the affected assets, attack vectors and potential impact of the breach, and then delivers a detailed explanation of how to remediate the issue directly into project-tracking tools like Jira.
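A pipeline of this kind can be sketched in a few lines. The sketch below is purely illustrative: the function names, prompt wording and alert fields are assumptions, not Orca's actual implementation, and a stub stands in for the real GPT-3 call (which would go through OpenAI's API) and the Jira hand-off.

```python
# Hypothetical alert-to-remediation pipeline, loosely modelled on the flow
# described in the article. All names and fields here are illustrative.

def build_remediation_prompt(alert: dict) -> str:
    """Turn a structured security alert into a prompt for an LLM."""
    return (
        "You are a cloud security assistant.\n"
        f"Alert: {alert['title']}\n"
        f"Affected asset: {alert['asset']}\n"
        f"Attack vector: {alert['vector']}\n"
        f"Potential impact: {alert['impact']}\n"
        "Explain, step by step, how to remediate this finding. "
        "Include CLI commands and Terraform where applicable."
    )

def remediation_ticket(alert: dict, llm) -> dict:
    """Ask the model for advice and shape it as a Jira-style ticket payload."""
    advice = llm(build_remediation_prompt(alert))
    return {
        "summary": f"Remediate: {alert['title']}",
        "description": advice,
        "labels": ["security", "auto-generated"],
    }

# Example run with a stubbed model standing in for a real GPT-3 call:
fake_llm = lambda prompt: "1. Block public access on the bucket policy ..."
ticket = remediation_ticket(
    {
        "title": "Public S3 bucket",
        "asset": "logs-bucket",
        "vector": "anonymous read access",
        "impact": "data exposure",
    },
    fake_llm,
)
print(ticket["summary"])  # Remediate: Public S3 bucket
```

In a production version, `llm` would wrap an authenticated API call and the returned dict would be posted to an issue tracker; passing the model in as a parameter keeps the prompt-building and ticket-shaping logic testable without network access.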

In addition to the Cloud Console and the command line, users can apply the necessary changes through infrastructure as code (Terraform and Pulumi).

It’s a strategy designed to help security teams make the most of their existing setup. Since “most security teams are bound by limited resources,” Golan explains, this could reduce their daily workload.

Should we consider ChatGPT a net benefit for cyber security?

While Orca Security’s implementation of ChatGPT exemplifies the potential for AI to improve corporate security, some companies are less confident in the impact AI-based solutions will have on the threat landscape.

For example, last week Deep Instinct presented a threat intelligence study analysing the dangers of ChatGPT and concluding that “AI is better at developing malware than offering means to detect it.” In other words, malicious code is easier to create than it is for security professionals to detect.

Alex Kozodoy, manager of cyber research at Deep Instinct, put it this way: “Essentially, attacking is always easier than defending (the best defence is attacking), especially in this case, since ChatGPT allows you to bring back to life old forgotten code languages, alter or debug the attack flow in no time, and generate the whole process of the same attack in different variations (time is a key factor).”

On the other hand, as Kozodoy points out, “it is very difficult to defend when you don’t know what to expect,” which means that “defenders are able to be prepared for a limited set of attacks and for certain tools that can help them to investigate what has happened” (typically after they’ve already been breached).

The good news is that as more enterprises begin to experiment with ChatGPT to safeguard on-premise and cloud infrastructure, defensive AI processes will improve and stand a better chance of keeping up with an ever-growing number of AI-driven threats.