Don't roll the dice on cybersecurity for AI – Opinion

By Patrice Caine

The debate around AI is intensifying, and so is the skepticism. But AI is here to stay. While some headlines criticize tech giants for AI-powered social media or dubious consumer tools, AI itself is becoming indispensable. Its efficiency is unmatched, and it promises gains that no business or government can ignore.

In India, forecasts indicate that generative AI could add between $359 billion and $438 billion to GDP by 2029-30, figures recently cited by Michael Debabrata Patra, Deputy Governor of the Reserve Bank of India.


This means that, very soon, AI will be as integrated into our lives as electricity – driving our cars, shaping our healthcare, securing our banks and keeping the lights on. The big question is: are we ready for what comes next?

The public conversation around AI has largely focused on ethics, misinformation and the future of work. But an important issue flies under the radar: the security of AI itself. With AI embedded in almost every part of society, we are creating massive, interconnected systems with the power to shape – or, in the wrong hands, disrupt – our daily lives. Are we prepared for the risks?

As we give AI more control over data – from diagnosing diseases to managing physical access to sensitive sites – the fallout from a cyberattack grows exponentially. Disturbingly, some AIs are as fragile as they are powerful.

Recognizing the threat to AI assets, India's Ministry of Electronics and Information Technology recently hosted consultations on establishing the India AI Safety Institute to ensure the safe and ethical deployment of AI technologies across India. This will not only build India's domestic capacity in AI security but also encourage greater cooperation and global engagement. These are welcome steps, as governments and private organizations must work together to secure AI assets.

It is also worth noting that India's Defence Minister released the 'Evaluating Trustworthy Artificial Intelligence (ETAI)' framework and guidelines for the armed forces in October this year. He emphasized the importance of ensuring that these systems not only function as intended but are also resilient to adversarial attacks. This holds as true for civilian applications as for defence.

What are the possible ways to attack AI? There are two primary ones. The first is to steal data, compromising everything from personal health records to sensitive corporate secrets. Hackers can trick models into divulging protected information, whether by exploiting medical databases or by deceiving chatbots into circumventing their own safety nets.

The second is to sabotage the models themselves, skewing their results in dangerous ways. An AI-driven car tricked into reading a "stop" sign as "70 km/h" illustrates how real the threat can be. And as AI expands, the list of possible attacks will only grow.

Still, the biggest mistake would be to abandon AI because of these risks. Sacrificing competitiveness for security would leave organizations dependent on third parties, lacking expertise in and control over a technology that is fast becoming essential.

So how do we reap AI's benefits without gambling on its risks?

Here are three critical steps:

Choose AI wisely. Not all AI is equally vulnerable to attack. Large language models, for example, are highly susceptible because they rely on vast datasets and statistical methods. But other types of AI, such as symbolic or hybrid models, are less data-intensive and operate on explicit rules, which makes them harder to crack.

Deploy proven defenses. Tools such as digital watermarking, cryptography and tailored training can harden AI models against new threats. For example, Thales' "Battle Box" lets cybersecurity teams stress-test AI models to find and fix vulnerabilities before hackers can exploit them.

Level up organizational cybersecurity. AI does not operate in isolation – it is part of a larger information ecosystem. Traditional cybersecurity measures must be strengthened and tailored to the AI era. This starts with training employees; after all, human error remains the Achilles heel of any cybersecurity system.

Some may think that the battle over AI is just another chapter in the ongoing contest between bad actors and unwitting victims. But this time the stakes are higher than ever. If AI security is not made a priority, we risk ceding control to those who would use its power for harm.

The author is CEO of Thales Group.

Disclaimer: Views expressed are personal and do not reflect the official position or policy of Financial Express Online. Reproducing this content without permission is prohibited.