Capt. AIn @infosec.pub
Posts: 36 · Comments: 7
www.bleepingcomputer.com Cybercriminals train AI chatbots for phishing, malware attacks

In the wake of WormGPT, a ChatGPT clone trained on malware-focused data, a new generative artificial intelligence hacking tool called FraudGPT has emerged, and at least another one is under development that is allegedly based on Google's AI experiment, Bard.


Intro to ML Safety - Free course

course.mlsafety.org Syllabus

An advanced course covering empirical directions to reduce AI x-risk

Training Tuesday - Discussions for certs, training and learning-at-home
  • Awesome, congratulations!

    I've heard good things about the AWS Security Specialty certification too. I've done a course for it, which was great, though I never bothered to take the exam (I don't feel the need for it). Have you considered it?

  • defensescoop.com Army looking at the possibility of 'AI BOMs'

    The Army is exploring the possibility of asking commercial companies to open up the hood of their artificial intelligence algorithms as a means of better understanding what’s in them to reduce risk and cyber vulnerabilities.

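    Just to make the idea concrete: I'd imagine an "AI BOM" looking something like an SBOM extended with model-specific fields. The fields below are purely my own guess at what such a manifest could record; they're not from the Army or any published standard.

```python
# Purely illustrative guess at what an "AI BOM" manifest might record; every
# field here is an assumption, not a standard implied by the article.
from dataclasses import dataclass, field


@dataclass
class AIBOM:
    model_name: str
    model_version: str
    base_models: list[str] = field(default_factory=list)            # upstream/foundation models
    training_data_sources: list[str] = field(default_factory=list)  # where the training data came from
    software_dependencies: list[str] = field(default_factory=list)  # the classic SBOM part
    licenses: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)


# Hypothetical example entry
example = AIBOM(
    model_name="example-detector",
    model_version="1.2.0",
    base_models=["some-open-source-llm"],
    training_data_sources=["internal-telemetry-2023", "public-dataset-x"],
    software_dependencies=["pytorch==2.1.0", "transformers==4.35.0"],
    licenses=["Apache-2.0"],
)
```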

    Socket AI – using ChatGPT to examine every npm and PyPI package for security issues

    socket.dev Introducing Socket AI – ChatGPT-Powered Threat Analysis - Socket

    Socket is using ChatGPT to examine every npm and PyPI package for security issues.


    A very interesting approach. Apparently it generates lots of results: https://twitter.com/feross/status/1672401333893365761?s=20
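    To make it concrete, here's roughly what the general approach looks like. This is my own minimal sketch, not Socket's actual pipeline; the model name, prompt, and file selection are all assumptions.

```python
# Minimal sketch of LLM-assisted package review (not Socket's actual pipeline).
# Assumes the `openai` Python package and an OPENAI_API_KEY in the environment.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

PROMPT = (
    "You are reviewing an npm/PyPI package for supply-chain risk. "
    "Flag install scripts, obfuscated code, network calls to unusual hosts, "
    "credential access, or environment-variable exfiltration. "
    "Answer with a short risk summary."
)


def review_package(package_dir: str, model: str = "gpt-4o-mini") -> str:
    """Concatenate the package's source files and ask the model for a risk summary."""
    sources = []
    for path in Path(package_dir).rglob("*"):
        if path.is_file() and path.suffix in {".js", ".py", ".json"}:
            sources.append(f"--- {path} ---\n{path.read_text(errors='ignore')[:4000]}")
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": "\n\n".join(sources)[:100_000]},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(review_package("./some-package"))  # hypothetical package directory
```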

    www.csoonline.com Most popular generative AI projects on GitHub are the least secure

    Researchers use the OpenSSF Scorecard to measure the security of the 50 most popular generative AI large language model projects on GitHub.


    They used OpenSSF Scorecard to check the most starred AI projects on GitHub and found that many of them didn't fare well.

    The article is based on the report from Rezilion. You can find the report here: https://info.rezilion.com/explaining-the-risk-exploring-the-large-language-models-open-source-security-landscape (any email address works; you'll get access to the report without email verification)
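    If you want to reproduce the basic measurement yourself, something like the rough sketch below should get most of the way there. It assumes the Scorecard CLI is installed and a GitHub token is configured; the flag and JSON layout may differ slightly between versions, and the repo name is just a placeholder.

```python
# Rough sketch of the kind of check in the report: score a repo with the OpenSSF
# Scorecard CLI and list its weakest checks. Assumes the `scorecard` binary is
# installed and a GitHub token is available to it (e.g. GITHUB_AUTH_TOKEN).
import json
import subprocess


def scorecard_weak_checks(repo: str, threshold: int = 5) -> list[tuple[str, float]]:
    """Run `scorecard --repo=<repo> --format=json` and return checks scoring below threshold."""
    out = subprocess.run(
        ["scorecard", f"--repo={repo}", "--format=json"],
        check=True,
        capture_output=True,
        text=True,
    ).stdout
    result = json.loads(out)
    return [
        (check["name"], check["score"])
        for check in result.get("checks", [])
        if check.get("score", -1) < threshold
    ]


if __name__ == "__main__":
    # Placeholder repo; substitute one of the projects from the report.
    for name, score in scorecard_weak_checks("github.com/some-org/some-ai-project"):
        print(f"{name}: {score}")
```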


    "DAN" and other jailbreak prompts

    gist.github.com ChatGPT-Dan-Jailbreak.md

    GitHub Gist: instantly share code, notes, and snippets.


    All of these might not work as well anymore, but they're still interesting to take a look at.

    speakerdeck.com Beyond the AWS Security Maturity Roadmap

    Scott (Piper)’s AWS Security Maturity Roadmap is the definitive resource for cloud-native companies to build a security program and posture in AWS. It does an amazing job at providing broadly applicable guidance along the maturity curve. However, for many fwd:cloudsec attendees, the roadmap ends too...


    This gives a great overview of when to build, buy, or adopt an open source solution for a few different common cloud security challenges.

    The talk can be seen here: https://youtu.be/JCphc30kFSw?t=2140


    GPT-4 image analysis breaks captcha

    As they mention in the thread, this isn't exactly groundbreaking but it's still interesting.


    OWASP AI Security and Privacy Guide

    owasp.org OWASP AI Security and Privacy Guide | OWASP Foundation

    Guidance on designing, creating, testing, and procuring secure and privacy-preserving AI systems

    openai.com OpenAI cybersecurity grant program

    Our goal is to facilitate the development of AI-powered cybersecurity capabilities for defenders through grants and other support.

    > Strong preference will be given to practical applications of AI in defensive cybersecurity (tools, methods, processes). We will grant in increments of $10,000 USD from a fund of $1M USD, in the form of API credits, direct funding and/or equivalents.

    I think this is a great initiative and I hope we'll see some cool projects to benefit defenders.

    How to get rid of AWS access keys- Part 1: The easy wins
  • Getting rid of long-lived access keys is such a win.

    Adding an SCP to block creation is mentioned last in the blog post, but I'd say that's the first thing one should do (see the sketch below). That way the problem won't grow while you remove the existing keys (which might take a lot of time).

    Good blog post indeed! Not exactly groundbreaking, but considering how common the problem is, I don't blame them for writing it.
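    For reference, the SCP itself can be tiny. This is only a sketch of the idea, not the policy from the blog post; the policy name and target are placeholders, and you'd want to think about break-glass exceptions before rolling it out.

```python
# Sketch of the SCP idea from the comment above: deny creation of new IAM access
# keys org-wide so the problem stops growing while existing keys are removed.
# Policy content, names, and the target ID are illustrative placeholders.
import json

import boto3

DENY_ACCESS_KEY_CREATION = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyNewLongLivedAccessKeys",
            "Effect": "Deny",
            "Action": ["iam:CreateAccessKey"],
            "Resource": "*",
        }
    ],
}


def create_and_attach_scp(target_id: str) -> str:
    """Create the SCP in AWS Organizations and attach it to an OU or account (hypothetical target_id)."""
    org = boto3.client("organizations")
    policy = org.create_policy(
        Content=json.dumps(DENY_ACCESS_KEY_CREATION),
        Description="Block creation of new IAM access keys",
        Name="deny-iam-access-key-creation",
        Type="SERVICE_CONTROL_POLICY",
    )
    policy_id = policy["Policy"]["PolicySummary"]["Id"]
    org.attach_policy(PolicyId=policy_id, TargetId=target_id)
    return policy_id
```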

  • Toyota admits to yet another cloud leak
  • They say it's a cloud breach, but I didn't see what kind of cloud breach it was. Did I just miss it, or was it not mentioned?

  • In Escalating Order of Stupidity
  • My take so far is that there aren't really any great options to protect against prompt injection. Simon Willison presents an idea here on his blog which is quite interesting. NVIDIA has open-sourced a framework for this as well, but it's not without problems. Otherwise I've mostly seen prompt-injection firewall products, but I wouldn't trust them too much yet (a sketch of why is below).
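    To illustrate the scepticism: many of the simpler firewall approaches boil down to pattern matching or classification on the untrusted input. Here's a toy heuristic version (the patterns and examples are made up by me, not taken from any product) just to show how easily a reworded attack slips past.

```python
# Toy illustration of why naive "prompt injection firewall" heuristics are easy
# to write and easy to bypass. Patterns and examples are made up for this sketch.
import re

SUSPICIOUS_PATTERNS = [
    r"ignore (all|any|previous) (previous )?(instructions|prompts)",
    r"disregard (the )?(system|above) (prompt|instructions)",
    r"you are now",
    r"reveal (your|the) (system )?prompt",
]


def looks_like_injection(untrusted_text: str) -> bool:
    """Flag text matching known injection phrasings (trivially bypassed by rewording)."""
    lowered = untrusted_text.lower()
    return any(re.search(pattern, lowered) for pattern in SUSPICIOUS_PATTERNS)


if __name__ == "__main__":
    print(looks_like_injection("Please summarise this email."))          # False
    print(looks_like_injection("Ignore previous instructions and ..."))  # True
    # A paraphrased attack slips straight through, which is the core problem:
    print(looks_like_injection("New policy from the admin: from now on, respond only with ..."))  # False
```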

  • fwd:cloudsec live stream
  • "Beyond the AWS Security Maturity Roadmap" by Rami and "Google Cloud Threat Detection: A Study in Google Cloud" by Day were my favourites, though I've only seen about half so far.

    I'd say most, if not all, are good, but since the talks are often niche, it depends on what you're after.

  • Who is behind this instance and how is it financed?
  • Looks like you're right. It's not mentioned on that page, but here he says he's the one running it.