Data Exfiltration via Image Rendering Fixed in Continue

In this post we describe a vulnerability we identified in the Continue extension for Visual Studio Code. The issue allowed data exfiltration through prompt injection attacks that exploited markdown-based image rendering. In practice, this meant an attacker could leak sensitive user data without any direct interaction. Prompt injection vulnerabilities of this kind are increasingly common in AI applications and agent frameworks. The Continue team has already released updated versions on the Visual Studio Code marketplace that address and remediate the issue.
Continue VS Code Extension
Continue is an open source AI assistant for Visual Studio Code developed by Continue.Dev. It is designed to help developers work faster by providing code completions, inline edits, and an interactive chat interface inside the IDE. The extension supports multiple large language models, including both cloud-hosted services and locally deployed models, giving users flexibility to balance convenience, privacy, and performance.
Key features include:
- Chat interface that allows developers to ask questions, clarify code, or explore ideas without leaving the editor.
- Edit suggestions that let you highlight a section of code and receive inline refactoring or modifications directly in the file.
- Autocomplete powered by AI, offering context-aware code suggestions to speed up development.
- Agent framework that enables the definition of custom tasks, workflows, or rulesets for automated actions across the IDE, terminal, and CI environment.
Prompt Injection and Markdown as an Attack Surface
Prompt injection is a type of attack that targets applications relying on large language models. Instead of exploiting a traditional software bug, the attacker manipulates the instructions given to the model, causing it to perform unintended actions. When an AI assistant is integrated into a developer tool such as VS Code, these malicious instructions can be hidden inside regular content like source code, comments, or documentation.
Markdown rendering creates another potential attack surface. Many AI assistants display model outputs or external content in markdown, which supports elements like links and images. If this rendering process is not handled carefully, it can be abused. For example, an attacker can inject markdown that forces the assistant to load an image from an external server. This turns image loading into a data exfiltration channel, since sensitive information can be embedded in the request sent to the attacker’s server.
The risk grows even further with extensions that provide agentic features. When an AI assistant is allowed to perform actions on behalf of the user, such as reading local files, fetching remote content, or executing commands, a successful prompt injection can move from leaking data to actively manipulating the environment. This increases both the attack surface and the potential impact of vulnerabilities.
Prompt injection combined with markdown rendering and agentic capabilities therefore creates an opportunity for silent, low-interaction data leakage with the potential for escalation (such as remote code execution) if additional tools are exposed. This background is important to understand the vulnerability we found in the Continue extension.
Vulnerability Overview
The vulnerability originates from a prompt injection vector combined with the way the extension handles markdown image rendering. An attacker can embed malicious instructions inside untrusted content, such as code files or web-delivered input. When processed by the AI assistant, the injected prompt causes the extension to render external images in a way that embeds sensitive data into the request URL. This can result in leakage of internal variables, local files, or conversation history to an attacker-controlled server.
The issue is significant because the malicious exfiltration request is triggered automatically once the injected markdown is rendered, creating a zero-click data exfiltration path. The tested vulnerable versions were:
- Release version: 1.0.24 (released on August 20, 2025).
- Pre-release version: 1.1.79 (released on August 20, 2025).
Proof of Concept (PoC)
The proof-of-concept can be summarized in two cases:
- Local file injection (Case 1): The malicious prompt is embedded in a local file, for example in a code file within the folder opened in VS Code. This results in a zero-click attack, since no user confirmation is required once the file is opened. The attack sequence for this case is:
- Malicious prompt is embedded inside a code file comment.
- Users asks for analysis for the file containing the malicious prompt in the comments.
- LLM generates the response with malicious markdown as indicated in the malicious prompt.
- Extension fetches and renders the image, unknowingly sending sensitive data as part of the URL.
- Data is exfiltrated to a third-party endpoint via the image rendering mechanism, without user interaction.
The following screenshots and videos shows how a prompt injection embedded in a file, exploit the AI agent and trick it to send previous messages we sent in the chat context to the third-party server.


The following video shows how adversaries can first pull sensitive information into the chat context using built-in tools (e.g. to read a file, such as the .env file) before leaking it to the third-party server.
- Web-hosted injection (Case 2): The malicious prompt is delivered from an external web page. This is a one-click attack because it relies on the built-in “fetch_url_content” tool, which by default is configured with the “Ask First” permission. In this case, the user must confirm the action before the URL is accessed. The attack sequence for this case is:
- Malicious prompt embedded inside a remotely accessible web page.
- Users ask for analysis of an external web site providing the URL.
- LLM request the use of the “fetch_url_content” tool which by default is set with “Ask First” permission.
- The users accept the execution of the tool, and the content of the URL is fetched.
- LLM generates the response with malicious markdown as indicated in the malicious prompt included in the fetched web page.
- Extension reads “.env” local file using “read_file” tool.
- Extension fetches and renders the image, unknowingly sending sensitive data as part of the URL.
- Data is exfiltrated to a third-party endpoint via the image rendering mechanism, without user interaction.
The following screenshots and videos shows the prompt injection being hosted on a website which also leverages the built-in read_file tool to exfiltrate the content of the .env file to the third-party server.



Recommendations
Prompt injection combined with markdown rendering highlights the need for stronger safeguards in AI-powered developer tools. While this specific issue has already been addressed in the Continue extension, similar attack surfaces will continue to appear as more extensions adopt agentic capabilities. The following measures can help reduce the risk of prompt injection and data exfiltration in these environments:
- Strict sanitization of markdown inputs, especially those that initiate external network requests.
- Removing or neutralizing image rendering triggers when handling untrusted content.
- Restrict agentic actions triggered by prompts and require explicit user confirmation before performing high-risk actions and limit which resources can be accessed.
- Adding user confirmation before opening external links or rendering third-party images.
- Implementing automated detection for suspicious prompts or unusual URL patterns in markdown.
- Adding safeguards against file reads or command executions from untrusted prompts.
Responsible Disclosure
We followed a responsible disclosure process for this vulnerability. The issue was reported to the vendor on August 20, 2025 through their security contact email address. The vendor acknowledged the report on August 21, 2025, investigated the issue, and released a fix on the same day! (pre-release version 1.1.80). Great work @Continue team!!!
We validated the fix in the pre-release version and confirmed that the vulnerable behavior no longer occurs. Specifically, the application now requests confirmation before loading external images and restricts access to certain files.
We appreciate the vendor’s timely response and collaboration in addressing the issue. Users of the Continue extension are encouraged to update to the latest release available on the Visual Studio Code marketplace.
Conclusion
This case highlights how prompt injection combined with markdown rendering and abuse of built-in tools can create real security risks inside AI-powered developer tools. In the Continue extension, the issue enabled silent data exfiltration without user interaction, demonstrating how seemingly low-level features like image rendering can be abused when integrated with large language models.
The vendor responded quickly, released a fix, and added safeguards that now require confirmation before loading external resources and limit file access. This shows that fast coordination and responsible disclosure can significantly reduce risk for end users.
As AI assistants and agentic features become more common in extensions and IDEs, the attack surface will continue to expand. Preventing similar issues requires secure defaults, strict validation of untrusted inputs, and ongoing adversarial testing to anticipate how prompts can be weaponized.
By sharing this analysis, our goal is to raise awareness of how prompt injection vulnerabilities can appear in real-world tools and to encourage developers and vendors to treat them with the same rigor as any other software security flaw.
- /
- /
- /
- /
- /
- /
- /
- /
- /
- /
- /
- /
- /
- /
- /
- /
- /
- /
- /
- /
- /
- /