AI, Risk, & Resilience
Discover how the EU Cyber Resilience Act, shadow AI, and generative AI are redefining cybersecurity risks and strategies for enterprises.
In today’s Tech Pulse, gain insight into how:
The EU Cyber Resilience Act will reshape compliance for manufacturers worldwide, with reporting obligations beginning in 2026 and significant penalties for non-compliance.
Shadow AI is creating hidden vulnerabilities in enterprises as unsanctioned AI tools proliferate and evade traditional security measures.
Gen AI offers both opportunities and risks, with cybercriminals leveraging it for advanced attacks and data privacy concerns on the rise.
Each of these articles is penned by members of Forbes Technology Council, key luminaries shaping the future of technology leadership.
Grab your coffee, and let's dive in!
The EU Cyber Resilience Act: What Manufacturers Need to Know & Do
The EU Cyber Resilience Act (CRA) sets a new global precedent for cybersecurity compliance, with stiff penalties of up to 2.5% of global turnover for non-compliance.
Though the Act's main obligations do not take effect until late 2027, its reporting requirements begin in September 2026, forcing manufacturers worldwide to prepare sooner rather than later. The CRA applies to any product with a digital component sold in the EU.
Is your business ready? Check out the insights below:
👉 The CRA's core requirements:
Conduct cybersecurity risk assessments and ensure products are free of known vulnerabilities before market launch.
Report security incidents within 24 hours and maintain detailed technical documentation (a deadline-tracking sketch follows below).
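A 24-hour window is tight enough to be worth automating. As a rough illustration (not official CRA tooling), here is a minimal Python sketch of a deadline tracker that packages an incident into a report payload and flags when the notification window is about to lapse; the incident fields, payload shape, and escalation thresholds are hypothetical placeholders you would adapt to your own process.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

REPORTING_WINDOW = timedelta(hours=24)  # 24-hour reporting window referenced above

@dataclass
class Incident:
    product: str
    summary: str
    detected_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    @property
    def deadline(self) -> datetime:
        return self.detected_at + REPORTING_WINDOW

    def time_remaining(self) -> timedelta:
        return self.deadline - datetime.now(timezone.utc)

    def to_report(self) -> dict:
        # Payload shape is illustrative; adapt it to your actual submission process.
        return {
            "product": self.product,
            "summary": self.summary,
            "detected_at": self.detected_at.isoformat(),
            "report_deadline": self.deadline.isoformat(),
        }

def check_and_escalate(incident: Incident, warn_before: timedelta = timedelta(hours=4)) -> None:
    """Print an escalation warning when the reporting deadline is close or missed."""
    remaining = incident.time_remaining()
    if remaining <= timedelta(0):
        print(f"OVERDUE: report for {incident.product} was due at {incident.deadline.isoformat()}")
    elif remaining <= warn_before:
        print(f"ESCALATE: {remaining} left to report {incident.product}")
    else:
        print(f"OK: {remaining} left to report {incident.product}")

if __name__ == "__main__":
    incident = Incident(product="smart-thermostat-fw",
                        summary="Actively exploited buffer overflow in update parser")
    print(incident.to_report())
    check_and_escalate(incident)
```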
🔧 Industry challenges:
Most manufacturers currently fall short of CRA requirements due to weak security configurations, outdated libraries, and limited lifecycle support for updates.
Small and medium manufacturers may face the steepest uphill battle in achieving compliance.
🚀 What to do now:
Prioritize reporting: Build workflows and update documentation for rapid vulnerability reporting.
Secure designs: Conduct risk assessments, strengthen encryption, and adopt secure boot and update protocols (see the signed-update verification sketch below).
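To make the secure-update point concrete, here is a minimal sketch of verifying a detached Ed25519 signature on an update image before installing it, using the widely used Python `cryptography` package. The file names and key handling are illustrative assumptions; a production pipeline would also pin key provenance and guard against rollback.

```python
from pathlib import Path

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey, Ed25519PublicKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def verify_update(image_path: str, signature_path: str, public_key_bytes: bytes) -> bool:
    """Return True only if the update image matches its detached Ed25519 signature."""
    image = Path(image_path).read_bytes()
    signature = Path(signature_path).read_bytes()
    public_key = Ed25519PublicKey.from_public_bytes(public_key_bytes)  # 32-byte raw public key
    try:
        public_key.verify(signature, image)
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    # Self-contained demo: in practice the private key stays in the vendor's build pipeline,
    # and only the public key ships with the device.
    private_key = Ed25519PrivateKey.generate()
    image = b"example firmware image"
    Path("firmware.bin").write_bytes(image)
    Path("firmware.sig").write_bytes(private_key.sign(image))
    public_bytes = private_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

    print("valid" if verify_update("firmware.bin", "firmware.sig", public_bytes) else "invalid")
```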

Still Interested in Forbes Technology Council?
As a member, you'll receive:
- Publishing Opportunities: to share your expert insights on Forbes.com through Expert Panels and bylined articles.
- Executive Profile: a professional, SEO-friendly profile on Forbes.com.
- Networking Benefits: access to a member portal to connect with other world-class technology leaders.
- And Much More: from premium travel and lifestyle benefits to exclusive virtual knowledge sharing events, members join to learn and grow with their peers.
Click the button below to continue your application today.

Shadow AI in 2025: Five Critical Insights for Security Leaders
Gen AI tools are infiltrating enterprises at an unprecedented pace, often without formal approval—creating a security minefield known as shadow AI. From unsecured AI apps to unsanctioned long-term usage, shadow AI significantly increases enterprise data risks.
The 2025 State of Shadow AI Report highlights the top insights organizations need to act on now:
👉 Shadow AI Apps Lack Basic Security: Popular AI tools often overlook encryption, MFA, and audit logging—leaving sensitive data exposed.
⚡ The "Popularity Trap": Employees choose feature-rich but insecure tools, creating a blind spot for IT teams.
📊 Overreliance on OpenAI: 53% of shadow AI activity revolves around OpenAI products, amplifying risks tied to one provider.
⏳ Shadow AI Lingers: Many unsanctioned AI tools remain in use for over a year, embedding into workflows and complicating removal efforts.
🏢 Small Businesses, Big Risks: Smaller firms face the highest density of shadow AI, with over 25% of employees using unsanctioned tools.
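A practical first step against that IT blind spot is simply measuring it. The sketch below scans a web-proxy or DNS log export for traffic to a watchlist of generative AI domains and tallies which users touch them. The CSV columns and the domain list are assumptions; swap in your own egress data and an up-to-date inventory of AI services.

```python
import csv
from collections import defaultdict

# Illustrative watchlist: extend with the AI services relevant to your environment.
AI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "claude.ai",
    "gemini.google.com",
    "perplexity.ai",
}

def scan_proxy_log(path: str) -> dict[str, set[str]]:
    """Map each user to the set of AI domains they contacted.

    Assumes a CSV export with at least 'user' and 'host' columns; adapt to your proxy's schema.
    """
    usage: dict[str, set[str]] = defaultdict(set)
    with open(path, newline="") as handle:
        for row in csv.DictReader(handle):
            host = row.get("host", "").lower()
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                usage[row.get("user", "unknown")].add(host)
    return usage

if __name__ == "__main__":
    report = scan_proxy_log("proxy_log.csv")  # hypothetical export from your gateway
    for user, domains in sorted(report.items()):
        print(f"{user}: {', '.join(sorted(domains))}")
```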
Generative AI in Cybersecurity: A Double-Edged Sword
Generative AI, the technology behind ChatGPT and similar tools, is reshaping workflows and boosting efficiency. But it's not all upside: cybercriminals are weaponizing AI to enhance attacks, while unsupervised internal use of AI tools introduces significant vulnerabilities.
Here’s all you need to know:
🔥 AI's Growing Threats:
Social Engineering: AI generates flawless phishing emails and even realistic voice/face clones to deceive employees.
Sophisticated Malware: AI-powered malware adapts to systems, evading detection with techniques like polymorphic code rewriting.
✅ How to Adopt AI Safely:
Enforce access controls: Limit who can use AI tools and train staff on data security.
Avoid sharing confidential data: Public AI platforms shouldn’t host proprietary information (see the redaction sketch after this list).
Verify AI outputs: Review generated content for accuracy and bias.
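As a concrete illustration of the second point, here is a minimal redaction filter that strips obvious secrets (email addresses, card-like numbers, API-key-looking strings) from a prompt before it leaves your environment. The patterns are illustrative only and would need tuning; redaction complements, rather than replaces, access controls and approved tooling.

```python
import re

# Illustrative patterns only: real deployments typically pair regexes with
# dictionary checks, classifiers, or a dedicated DLP service.
REDACTION_RULES = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[REDACTED_EMAIL]"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[REDACTED_CARD]"),
    (re.compile(r"\b(?:sk|api|key)[-_][A-Za-z0-9]{16,}\b", re.IGNORECASE), "[REDACTED_KEY]"),
]

def redact(prompt: str) -> str:
    """Return the prompt with anything matching the redaction rules replaced."""
    for pattern, replacement in REDACTION_RULES:
        prompt = pattern.sub(replacement, prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarize the contract for jane.doe@example.com, auth token sk-abc123def456ghi789."
    print(redact(raw))
    # -> "Summarize the contract for [REDACTED_EMAIL], auth token [REDACTED_KEY]."
```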
Wrapping Up
If these articles sparked your interest, we have a network that you will absolutely love: Forbes Technology Council.
This exclusive, vetted community brings together the brightest minds in technology — founders, CEOs, CIOs, CTOs, CISOs, and other leaders of technology-focused teams.
Put yourself at the forefront of innovation with access to publishing opportunities on Forbes.com, a personalized, SEO-friendly Executive Profile, and the chance to network with other respected leaders in the field.
Join Forbes Technology Council today, and become part of a group driving transformation in technology.