
Looking at some #AI generated #threatmodel output, I saw it listed stealing a user's credentials and using them in the "Spoofing" category. I was uncertain: is that spoofing or elevation of privilege? So I wandered over to a #microsoft page on #stride.

They say it's spoofing, which is fine. It's reasonable. I don't care as long as we all agree.

But in that table, that's literally the only example of spoofing. There are a LOT of other kinds of things that could be called spoofing. If you're gonna have only one example of spoofing, I don't think stealing credentials is the best example.

learn.microsoft.com: Threats - Microsoft Threat Modeling Tool - Azure (the threat category page for the Microsoft Threat Modeling Tool)
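
To make the point concrete, here's a sketch of how many distinct things plausibly land in the "Spoofing" bucket. These examples are my own illustrative list, not Microsoft's:

```python
# Illustrative only: my own examples of threats that all plausibly
# fall under STRIDE's "Spoofing" category. Stolen credentials are
# just one entry among many.
SPOOFING_EXAMPLES = {
    "stolen credentials": "replaying a legitimate user's username/password",
    "session hijacking": "presenting a stolen session cookie or token",
    "ARP/DNS spoofing": "answering for a network identity you don't own",
    "email sender spoofing": "forging the From: header on a message",
    "UI spoofing / phishing": "presenting a look-alike login page",
    "endpoint spoofing": "impersonating a trusted client or service",
}

for name, description in SPOOFING_EXAMPLES.items():
    print(f"Spoofing - {name}: {description}")
```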

Lastly, there's the training data. I work for #AWS (so these are strictly my personal opinions). We are opinionated about the platform. We think that there are things you should do and things you shouldn't. If you have deep knowledge of anything (Microsoft, Google, NodeJS, SAP, whatever) you will have informed opinions.

The threat models that I have seen, that use general purpose models like Claude Sonnet, include advice that I think is stupid because I am opinionated about the platform. There's training data about AWS in the model that was authored by not-AWS. And there's training data in the model that was authored by AWS. The former massively outweighs the latter in a general-purpose, trained-on-the-Internet model.

So internal users (who are expected to do things the AWS way) are getting threats that (a) don't match our way of working, and (b) they can't mitigate anyway. Like I saw an AI-generated threat of brute-forcing a Cognito token. While the possibility of that happening (much like buying a winning lottery ticket) is non-zero, that is not a threat that a software developer can mitigate. There's nothing you can do in your application stack to prevent, detect, or respond to that. You're accepting that risk, like it or not, and I think we're wasting brain cells and disk sectors thinking about it and writing it down.
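
For a sense of scale, here's a back-of-the-envelope sketch. The 256 bits of effective entropy is my assumption for illustration, not a statement about Cognito's actual token format:

```python
# Back-of-the-envelope: expected time to brute-force a token with
# ~256 bits of effective entropy (an assumption for illustration,
# not a Cognito spec), given a wildly generous attacker.
GUESSES_PER_SECOND = 1e12
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

expected_guesses = 2**256 / 2  # on average you search half the keyspace
years = expected_guesses / GUESSES_PER_SECOND / SECONDS_PER_YEAR
print(f"~{years:.2e} years")   # ~1.8e57 years; the lottery looks like a sure thing
```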

The other one I hate is when it tells you to encrypt your data at rest in S3. Try not to: S3 encrypts new objects at rest by default, so there's no action for you to take. The thing you control is which key does the encrypting and who can use that key.
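
A minimal sketch of the part you do control, using boto3. The bucket name and key ARN are hypothetical:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and KMS key ARN. The decision you own is WHICH
# key encrypts (and, via the KMS key policy, who can use it), not
# WHETHER the data is encrypted at rest.
s3.put_bucket_encryption(
    Bucket="example-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE",
                },
                "BucketKeyEnabled": True,
            }
        ]
    },
)
```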

So if you have an area of expertise, the majority of the training data in any consumer model is worse than your knowledge. It is going to generate threats and risks that will irritate you.

4/fin


Threat models evolve over time, the same as your software does. Nobody is building a save/load feature into their AI-powered threat modeler. Getting deterministic output from consumer-grade LLMs is not a given, so even if you DO create save/reload capability, it's imperfect.

All the tools I've seen start every session from a blank sheet of paper. So if you're revisiting an app that you threat modeled before, because you want to update your model, you're going to start from scratch.
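
A save/load layer doesn't have to be fancy. Even a sketch like this (the file layout and field names are mine, purely illustrative) would let you diff a model across sessions instead of starting over:

```python
import json
from pathlib import Path

MODEL_FILE = Path("threat_model.json")  # hypothetical on-disk format

def save_model(threats: list[dict]) -> None:
    """Persist this session's threats so the next session can start from them."""
    MODEL_FILE.write_text(json.dumps(threats, indent=2))

def load_model() -> list[dict]:
    """Reload the prior session's threats, or start empty on first run."""
    if MODEL_FILE.exists():
        return json.loads(MODEL_FILE.read_text())
    return []

def diff_models(old: list[dict], new: list[dict]) -> list[dict]:
    """Naive diff: report threats the new session added (keyed by title)."""
    old_titles = {t["title"] for t in old}
    return [t for t in new if t["title"] not in old_titles]
```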

3/n


Related to this, nobody seems to account for the fact that LLMs bullshit sometimes. If you pin someone down and ask "the user of your AI-powered threat modeller: do they know how to do a threat model without AI?" many people will say "yes." Because to say "no" is to admit that people will be blindly following LLM output that might be total bullshit.

The goal, however, of many of these systems is to make threat modeling more accessible to people who don't know how to do it. To do that, though, you'd have to be more skeptical about your user, and spend some time educating them. Otherwise, they leave the process no smarter than they began.

Honestly, I think a lot of people think the threat model is going to be done entirely by the AI and they want to build a system where the human just consumes and uses it.

2/n

I have seen a lot of efforts to use an #LLM to create a #ThreatModel. I have some insights.

Attempts at #AI #ThreatModeling tend to do 3 things wrong:

  1. They assume that the user's input is both complete and correct. The LLM (in the implementations I've seen) never questions "are you sure?" and it never prompts the user like "you haven't told me X, what about X?" (a sketch of a prompt that pushes back follows this list)
  2. Lots of teams treat a threat model as a deliverable. Like we go build our code, get ready to ship, and then "oh, shit! Security wants a threat model. Quick, go make one." So it's not this thing that informs any development choices during development. It's an afterthought that gets built just prior to #AppSec review.
  3. Lots of people think you can do an adequate threat model with only technical artifacts (code, architecture, data flow, documentation, etc.). There's business context that needs to be part of every decision, and teams are just ignoring that.
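
On point 1, here's a sketch of what interrogating the user could look like. The system prompt wording is my own; the Bedrock converse call is real, but the model ID is just an example:

```python
import boto3

# A system prompt that tells the model to question the input instead of
# assuming it is complete and correct. The wording is my own sketch.
SYSTEM_PROMPT = (
    "You are a threat modeling assistant. Before generating any threats, "
    "list the information you are missing (trust boundaries, data "
    "classification, authn/authz model, deployment environment) and ask "
    "the user about each gap. Do not produce threats until the user has "
    "answered or explicitly declined to answer."
)

bedrock = boto3.client("bedrock-runtime")
response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # example model ID
    system=[{"text": SYSTEM_PROMPT}],
    messages=[{"role": "user", "content": [{"text": "Here is my architecture: ..."}]}],
)
print(response["output"]["message"]["content"][0]["text"])
```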

1/n

Some of my colleagues at #AWS have created an open-source serverless #AI assisted #threatmodel solution. You upload architecture diagrams to it, and it uses Claude Sonnet via Amazon Bedrock to analyze them.

I'm not too impressed with the threats it comes up with. But I am very impressed with the amount of typing it saves. Given nothing more than a picture and about 2 minutes of computation, it spits out a very good list of what is depicted in the diagram and the flows between them. To the extent that the diagram is accurate/well-labeled, this solution seems to do a very good job writing out what is depicted.

I deployed this "Threat Designer" app. Then I took the architecture image from this blog post and dropped that picture into it. Part of the list the image analysis produced is attached.

This is a specialized, context-aware kind of OCR. I was impressed by the boundaries, flows, and assets it pulled from a graphic. It could save a lot of typing time. I was not impressed with the threats it identifies. Having said that, it did identify a handful of things I hadn't thought of before, like EventBridge event injection. But the majority of the threats are low value.
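
To give a flavor of why the extraction is the impressive part, the inventory it writes out amounts to something like this. The field names and entries here are hypothetical, not the tool's actual schema:

```python
# Hypothetical shape of the extracted inventory; not Threat Designer's
# real output format, just an illustration of what this kind of
# context-aware OCR yields from a single diagram.
extracted = {
    "assets": ["API Gateway", "Lambda function", "DynamoDB table", "S3 bucket"],
    "trust_boundaries": ["internet / VPC edge", "service / data store"],
    "flows": [
        {"from": "client", "to": "API Gateway", "protocol": "HTTPS"},
        {"from": "Lambda function", "to": "DynamoDB table", "protocol": "AWS SDK"},
    ],
}
```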

I suspect this app is not cheap to run. So caveat deployor.
#cloud #cloudsecurity #appsec #threatmodeling

#DuckDuckGo is now offering free, #anonymized access to a number of fast #AI #chatbots that won't train on your data. You currently don't get all the premium models and features of paid services, but you do get access to privacy-promoting, anonymized versions of smaller models like GPT-4o mini from #OpenAI and open-source #MoE (mixture of experts) models like Mixtral 8x7B.

Of course, for truly sensitive or classified data you should never use online services at all. Anything online carries heightened risks of human error; deliberate malfeasance; corporate espionage; legal, illegal, or extra-legal warrants; and network wiretapping. I personally trust DuckDuckGo's no-logging policies and presume their anonymization techniques are sound, but those of us in #cybersecurity know the practical limitations of such measures.

For any situation where those measures are insufficient, you'll need to run your own instance of a suitable model on a local AI engine. However, that's not really the #threatmodel for the average user looking to get basic things done. Great use cases include finding quick answers that traditional search engines aren't good at, or performing common AI tasks like summarizing or improving textual information.
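
For the local-instance route, a minimal sketch using Ollama's local HTTP API; "mistral" is just an example model name, and this assumes `ollama serve` is already running:

```python
import json
import urllib.request

# Minimal sketch: query a locally running Ollama instance so sensitive
# text never leaves your machine. Assumes the model has been pulled;
# "mistral" is just an example model name.
payload = {
    "model": "mistral",
    "prompt": "Summarize the following text: ...",
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```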

The AI service provides the typical user with essential AI capabilities for free. It also takes steps to prevent for-profit entities with privacy-damaging #TOS from training on your data at whim. DuckDuckGo's approach seems perfectly suited to these basic use cases.

I laud DuckDuckGo for their ongoing commitment to privacy, and for offering this valuable addition to the AI ecosystem.

duckduckgo.com/chat
