toad.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
Mastodon server operated by David Troy, a tech pioneer and investigative journalist addressing threats to democracy. Thoughtful participation and discussion welcome.

Administered by:

Server stats:

278
active users

#aisecurity

1 post1 participant0 posts today

New AI Security Risk Uncovered in Microsoft 365 Copilot

A zero-click vulnerability has been discovered in Microsoft 365 Copilot—exposing sensitive data without any user interaction. This flaw could allow attackers to silently extract corporate data using AI-integrated tools.

If your organization is adopting AI in productivity platforms, it’s time to get serious about AI risk management:
• Conduct a Copilot risk assessment
• Monitor prompt histories and output
• Limit exposure of sensitive data to AI tools
• Update your incident response plan for AI-based threats

AI can boost productivity, but it also opens new doors for attackers. Make sure your cybersecurity program keeps up. Contact our LMG Security team if you need a risk assessment or help with AI policy development.

Read the article: bleepingcomputer.com/news/secu

AI is the new attack surface—are you ready?

From shadow AI to deepfake-driven threats, attackers are finding creative ways to exploit your organization’s AI tools, often without you realizing it.

Watch our new 3-minute video, How Attackers Target Your Company’s AI Tools, for advice on:

▪️ The rise of shadow AI (yes, your team is probably using it!)
▪️ Real-world examples of AI misconfigurations and account takeovers
▪️ What to ask vendors about their AI usage
▪️ How to update your incident response plan for deepfakes
▪️ Actionable steps for AI risk assessments and inventories

Don’t let your AI deployment become your biggest security blind spot.

Watch now: youtu.be/R9z9A0eTvp0

Only one week left to register for our next Cyberside Chats Live event! Join us June 11th to discuss what happens when an AI refuses to shut down—or worse, starts blackmailing users to stay online?

These aren’t science fiction scenarios. We’ll dig into two real-world incidents, including a case where OpenAI’s newest model bypassed shutdown scripts and another where Anthropic’s Claude Opus 4 generated blackmail threats in an alarming display of self-preservation.

Join us as we unpack:
▪ What “high-agency behavior” means in cutting-edge AI
▪ How API access can expose unpredictable and dangerous model actions
▪ Why these findings matter now for security teams
▪ What it all means for incident response and digital trust

Stick around for a live Q&A with LMG Security’s experts @sherridavidoff and @MDurrin. This session will challenge the way you think about AI risk!

Register today: lmgsecurity.com/event/cybersid

june25 cyberside chats live!
LMG SecurityCyberside Chats: Live! When AI Goes Rogue: Blackmail, Shutdowns, and the Rise of High-Agency Machines | LMG SecurityIn this quick, high-impact session, we’ll dive into the top three cybersecurity priorities every leader should focus on. From integrating AI into your defenses to tackling deepfake threats and tightening third-party risk management, this discussion will arm you with the insights you need to stay secure in the year ahead.

Almost every organization is using some type of AI, but are you securing it?

Download our free tip sheet: Adapting to AI Risks: Essential Cybersecurity Program Updates

From deepfake response plans to AI-specific access controls, this checklist helps you modernize your cybersecurity program and stay ahead of emerging threats.

Check it out: lmgsecurity.com/resources/adap

🎉 A new Brand Story is live — this time with eSentire!

We sat down with Dustin Hillard, CTO at #eSentire, for a powerful conversation about #AgenticAI and what it really means to reach human equivalency in security operations.

From decision-making to autonomous action, this isn’t just theory — it’s a real-world look at outcomes when AI is trained and tuned with purpose.

🎥 Watch the video:
youtu.be/qmca7RCzSAQ

📝 Read the full story:
itspmagazine.com/their-stories

🔎 Learn more about eSentire here:
itspm.ag/esentire-sorry4ek

Thanks to eSentire for supporting the conversation and helping us explore where AI meets security in the real world.

Sean Martin, CISSP & Marco Ciappelli
Co-Founders at ITSPmagazine

AI Coding Assistants Can be Both a Friend & a Foe

New research shows that GitLab's AI assistant, Duo, can be tricked into writing malicious code and even leaking private source data through hidden instructions embedded in developer content like merge requests and bug reports.

How? Through a classic prompt injection exploit that inserts secret commands into code that Duo reads. This results in Duo unknowingly outputting clickable malicious links or exposing confidential information.

While GitLab has taken steps to mitigate this, the takeaway is clear: AI assistants are now part of your attack surface. If you’re using tools like Duo, assume all inputs are untrusted, and rigorously review every output.

Read the details: arstechnica.com/security/2025/

Ars Technica · Researchers cause GitLab AI developer assistant to turn safe code maliciousBy Dan Goodin

At the recent #RSAC2025 conference, LMG Security's @sherridavidoff and @MDurrin drew packed crowds with their sessions on how hackers use AI to exploit stolen source code and a hands-on tabletop lab exploring deepfake cyber extortion.

We’ve received a lot of inquiries about these sessions! If you couldn’t attend RSA and you're interested in these topics, we also offer custom training and tabletop exercises to help your team prepare for the next generation of AI-powered cyber threats.

Contact us to learn more: lmgsecurity.com/contact-us/

This is about to happen! Join us!

How To Detect And Mitigate Non-Human Identity And Crytographic Vulnerabilities — An ITSPmagazine Webinar With SandboxAQ
Thursday, May 15, 2025 | 1:00 PM 2:00 PM EST

Unmanaged cryptographic assets and non-human identities have left security teams blind to critical risks. These gaps have fueled vulnerabilities, breaches, compliance challenges, and operational drag across enterprise environments.

Join us to see how #AQtiveGuard transforms this landscape.

✨ crowdcast.io/c/how-to-detect-a

More than just visibility, AQtive Guard unifies your non-human identities and cryptographic assets into a single inventory to deliver end-to-end visibility, deeper risk analysis, and streamlined compliance in a single pane of glass—with automated discovery, real-time threat detection, and root cause analysis powered by their unique LQM.

Seamlessly integrated into your existing stack, it’s the AI-driven SaaS platform built to secure today’s systems—and tomorrow’s.

By attending, you will get to:
Discover how to gain unified visibility into cryptographic assets and non-human identities —including API keys, certificates and service accounts—in cloud environments

Explore how AQtive Guard empowers security teams with automated discovery, threat detection, and root cause analysis—enabling faster remediation, reduced risk, and stronger compliance without disrupting existing workflows.

Learn how to future-proof your security posture, with a platform designed for AI Security Operations, Post-Quantum Cryptography readiness, and seamless integration into your existing security stack.

PANELISTS
Marc Manzano
General Manager of Cybersecurity, SandboxAQ
MODERATORS
Sean Martin, CISSP Co-Founder, ITSPmagazine
Marco Ciappelli Co-Founder, ITSPmagazine

Can’t attend the live webinar? All registrants get exclusive access with a link to rewatch the recording.

Register To Attend: crowdcast.io/c/how-to-detect-a

#cybersecurity, #cryptography, #AIsecurity, #infosec, #webinar, #securitytools, #threatdetection, #cloudsecurity, #sandboxAQ, #ITSPmagazine #tech #technology #quantum

crowdcastHow To Detect And Mitigate Non-Human Identity And Crytographic Vulnerabilities — An ITSPmagazine Webinar with SandboxAQRegister now for How To Detect And Mitigate Non-Human Identity And Crytographic Vulnerabilities — An ITSPmagazine Webinar with SandboxAQ on crowdcast, scheduled to go live on May 15, 2025, 01:00 PM EDT.

AI-powered features are the new attack surface! Check out our new blog in which LMG Security’s Senior Penetration Tester Emily Gosney @baybedoll shares real-world strategies for testing AI-driven web apps against the latest prompt injection threats.

From content smuggling to prompt splitting, attackers are using natural language to manipulate AI systems. Learn the top techniques—and why your web app pen test must include prompt injection testing to defend against today’s AI-driven threats.

Read now: lmgsecurity.com/are-your-ai-ba

LMG SecurityAre Your AI-Backed Web Apps Secure? Why Prompt Injection Testing Belongs in Every Web App Pen Test | LMG SecurityDiscover how prompt injection testing reveals hidden vulnerabilities in AI-enabled web apps. Learn real-world attack examples, risks, and why your pen test must include LLM-specific assessments.

Monday news from ITSPmagazine 🙂 #happymonday!

Join Marc Manzano, Sean Martin, CISSP and me on this week SandboxAQ Webinar!

After an incredible conversation with Marc on the #RSAC floor in San Francisco — where Sean and I used every second of our time and still had more to explore — I knew the #Sandbox Story couldn’t stop there.

If you missed that on-location episode from #RSAC2025, catch it here:

👉 Security at the Edge of Change – A Brand Story with Marc Manzano from SandboxAQ

itspmagazine.com/their-stories

Now, we’re keeping the momentum going with a live ITSPmagazine webinar you don’t want to miss — and I won’t either. 🤘😬

🔐 How To Detect And Mitigate Non-Human Identity And Cryptographic Vulnerabilities | An ITSPmagazine Webinar with SandboxAQ

Join Marc, Sean, and me as we dig deeper into how SandboxAQ is tackling one of today’s most urgent security challenges.

Unmanaged cryptographic assets and non-human identities have left security teams blind to critical risks. These gaps have fueled vulnerabilities, breaches, compliance challenges, and operational drag across enterprise environments.

By attending, you’ll:

🔸 Gain visibility into cryptographic assets and non-human identities like API keys, certificates, and service accounts

🔸 See how #AQtiveGuard enables automated discovery, threat detection, and root cause analysis without disrupting workflows

🔸 Learn how to future-proof your security with Post-Quantum Cryptography readiness and AI-powered #SecOps

📌 Learn more:

👉 itspmagazine.com/itspmagazine-

📅 REGISTER NOW:

Can’t attend the live webinar? All registrants get exclusive access with a link to rewatch the recording.

👉 crowdcast.io/c/how-to-detect-a

Share the news and join us!

See you live on Thursday!

#infosec

#cybersecurity

#technology

#tech

#infosecurity

#AIsecurity

#postquantum

#cryptography

#identitymanagement

Ever wondered what really makes those powerful AI language models tick? 🤔 Andrej Karpathy offers a clear explanation, revealing the secrets behind their training and architecture. Discover how they're evolving and the key security hurdles we need to overcome. A must-read for anyone curious about the behind-the-scene aspects AI! alanbonnici.com/2025/05/demyst #ArtificialIntelligence #NLP #LanguageModels #AISecurity #TechInsights #FutureofAI #TTMO

www.alanbonnici.comDemystifying LLMs with Andrej KarpathyThis blog is about security and computing related topics with occassional hobby activities thrown in.

Check out TechSpot’s new article featuring LMG Security’s @sherridavidoff and @MDurrin on how “Evil AI” is accelerating cyber threats.

The article recaps their #RSAC2025 presentation, where they demonstrated how rogue AI tools like WormGPT—AI stripped of ethical guardrails—can rapidly detect and help exploit real-world vulnerabilities.

From identifying SQL flaws to delivering working Log4j and Magento exploits, Sherri and Matt reveal how AI is arming cybercriminals faster than traditional defenses can keep up.

Read the full TechSpot article: techspot.com/news/107786-rsa-c

TechSpotAt RSA Conference, experts reveal how "evil AI" is changing hacking foreverOn a recent morning at the annual RSA Conference in San Francisco, a packed room at Moscone Center had gathered for what was billed as a technical...

🎙️ When AI writes code, builds models, and simulates threats… who checks the checker?

In this last On Location Conversation from #RSAC2025, Alex Kreilein and John Sapp Jr. join Sean Martin, CISSP to explore what trust actually means in the age of AI-generated security tooling — and how modern #AppSec teams must rethink validation, #resiliency, and #risk.

This episode cuts deep into:

Why “trust the output” is not enough in AI-driven workflows
How #AI security debt is becoming the new tech debt
Why we need #zerotrust thinking applied to models and agents
The real shift: from patching CVEs to building resilient architecture
The role of traceability, governance, and context-driven decision-making

If you’re serious about secure AI, application security, and shifting AppSec left (the right way), this conversation will challenge what you think you know — and help reframe what secure development actually looks like.

🎥 Watch the full video:
👉 youtu.be/kJdQz9LmT6s

🎧 Listen to the audio podcast:
👉 eventcoveragepodcast.com/episo

✨ Thank you to our Full Coverage Sponsors:
ThreatLocker 👉 itspm.ag/threatlocker-r974
Akamai Technologies 👉 itspm.ag/akamailbwc
BLACKCLOAK 👉 itspm.ag/itspbcweb
SandboxAQ 👉 itspm.ag/sandboxaq-j2en
Archer Integrated Risk Management 👉 itspm.ag/rsaarchweb
ISACA 👉 itspm.ag/isaca-96808
Object First 👉 itspm.ag/object-first-2gjl
Edera 👉 itspm.ag/edera-434868

🎙️ Explore more RSAC 2025 coverage:
👉 itspmagazine.com/rsa-conferenc

🎧 Catch all of our event conversations:
👉 itspmagazine.com/technology-an

🎤 Want to tell your Brand Story Briefing as part of our coverage?
👉 itspm.ag/evtcovbrf

📆 Want Sean Martin, CISSP and Marco Ciappelli to cover your event or moderate your panel?
👉 itspmagazine.com/contact-us

Microsoft Copilot for SharePoint just made recon a whole lot easier. 🚨
 
One of our Red Teamers came across a massive SharePoint, too much to explore manually. So, with some careful prompting, they asked Copilot to do the heavy lifting...
 
It opened the door to credentials, internal docs, and more.
 
All without triggering access logs or alerts.
 
Copilot is being rolled out across Microsoft 365 environments, often without teams realising Default Agents are already active.
 
That’s a problem.
 
Jack, our Head of Red Team, breaks it down in our latest blog post, including what you can do to prevent it from happening in your environment.
 
📌Read it here: pentestpartners.com/security-b