Frequently asked questions

How is AI access controlled, and what data do the AI features collect and process?

Administrators control AI rollout and access through settings, permissions, and policies. The data collected and processed depends on the AI workflow:

  • Session Insights (documentation): Captures session interactions needed to generate summaries and action steps, with anonymization applied before broader processing.
  • Tia (troubleshooting): Uses a limited technical device data snapshot taken at session start, screenshots taken only when explicitly triggered by the user, Tia conversation history, and permission-based Session Insights. It does not collect user content, personal documents, communication data, keystrokes, or activity data.
  • AI-assisted scripting: Uses user prompts and optional context from prior Session Insights to generate script drafts.

How is session data anonymized?

Session data is first captured and anonymized on the TeamViewer client using a rule‑based anonymization layer. Sensitive elements (including emails, URLs, passwords, credit card numbers, and IPs) are removed before leaving the endpoint. Additional anonymization occurs in cloud‑based services.
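
A rule-based redaction layer of this kind can be sketched as a small set of pattern rules applied in order. This is a minimal illustration only; the patterns and placeholder tokens below are assumptions for the sketch, not TeamViewer's actual anonymization rules:

```python
import re

# Illustrative redaction rules. A production anonymizer would use far more
# robust patterns and validation (e.g., Luhn checks for card numbers).
RULES = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"https?://\S+"), "[URL]"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),
    (re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"), "[IP]"),
    (re.compile(r"(password\s*[:=]\s*)\S+", re.IGNORECASE), r"\1[REDACTED]"),
]

def anonymize(text: str) -> str:
    """Apply each redaction rule in order, replacing matches with placeholders."""
    for pattern, replacement in RULES:
        text = pattern.sub(replacement, text)
    return text
```

Applied to a line such as `"Contact alice@example.com from 10.0.0.1"`, the sketch yields `"Contact [EMAIL] from [IP]"`, so the sensitive values never leave the endpoint in recoverable form.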

Does TeamViewer use third-party AI services?

Yes. TeamViewer uses third-party LLM services (Azure OpenAI and Google Gemini) under defined conditions to deliver AI outputs. For Session Insights, data is anonymized before broader processing, and the workflow uses encrypted transport (HTTPS/TLS).

Is my data encrypted?

Yes.

  • In transit: Data is encrypted using HTTPS (TLS) between TeamViewer clients, TeamViewer cloud services, cloud storage, and AI services (as shown in the Session Insights architecture).
  • At rest: Cloud storage uses industry-standard AES-256 encryption.
  • Additional protection: Session Insights and Tia data is protected by client-side encryption (CSE).  
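
The in-transit baseline above corresponds to standard TLS client behavior. As an illustrative sketch (not TeamViewer client code), Python's standard library configures certificate validation and hostname checking by default, and legacy protocol versions can be refused explicitly:

```python
import ssl

# A default client-side TLS context enforces certificate validation and
# hostname checking -- the baseline for encrypted HTTPS transport.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions

print(context.verify_mode == ssl.CERT_REQUIRED)  # True: server cert must validate
print(context.check_hostname)                    # True: cert must match hostname
```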

How are the encryption keys protected?

Session Insights and Tia use a public key obtained by the client over HTTPS. The corresponding private key is stored in encrypted form and additionally safeguarded by a key held in a certified Hardware Security Module (HSM). Before use, the private key is securely unwrapped within the HSM, and all HSM operations are fully audited.
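
The scheme described above is a key-wrapping pattern: the private key is stored only in encrypted ("wrapped") form under a key-encryption key (KEK) that never leaves the HSM, and unwrapping happens inside the HSM boundary. A deliberately simplified toy sketch of the wrap/unwrap flow follows; the XOR keystream stands in for real HSM-backed cryptography and is not secure, and all names are illustrative:

```python
import hashlib
import os

def _keystream(key: bytes, length: int) -> bytes:
    """Toy keystream from SHA-256 in counter mode -- NOT secure, illustration only."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def wrap(kek: bytes, private_key: bytes) -> bytes:
    """'Wrap' (encrypt) a private key under the key-encryption key (KEK)."""
    ks = _keystream(kek, len(private_key))
    return bytes(a ^ b for a, b in zip(private_key, ks))

def unwrap(kek: bytes, wrapped: bytes) -> bytes:
    """Unwrap inside the HSM boundary; the KEK itself never leaves the HSM."""
    return wrap(kek, wrapped)  # the XOR keystream is its own inverse

kek = os.urandom(32)           # stand-in for the HSM-held key
private_key = os.urandom(32)   # stand-in for the service's private key
stored = wrap(kek, private_key)        # only the wrapped form is ever stored
assert unwrap(kek, stored) == private_key
```

The point of the pattern is that compromising stored data yields only the wrapped key; recovering the private key requires an audited operation inside the HSM.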

Where are AI-generated outputs stored, and who can access them?

  • Session Insights: Outputs (summaries/action steps) are stored in the tenant cloud, and access is governed by user permissions via Admin Settings.
  • AI-assisted scripting: Generated scripts are stored in the tenant cloud Script Database for review, reuse, and controlled execution.  

How long is my data stored?

We store tenant data for the duration of your contract, unless you choose to delete it beforehand. Customers can delete stored AI-related artifacts, such as Session Insights and scripts, in bulk at any time via administrative controls.

What happens to my data when the contract ends?

Upon termination of the contract, customers can export or extract their data in accordance with applicable requirements, including the EU Data Act where relevant. After termination, TeamViewer will handle remaining data according to the agreed retention and deletion process, subject to applicable legal and contractual retention requirements.

Is my data ever processed in decrypted form?

Data is anonymized on-device before broader processing and undergoes additional anonymization in cloud-based systems. Data is handled in encrypted form both before and after cloud-based processing. Any decrypted processing is limited to what is technically necessary to generate the requested output, and data is not retained in decrypted form beyond service delivery.

Is customer data used to train AI models?

No. TeamViewer does not use customer data to train AI models. If we ever consider using fully anonymized data to train our AI models, this would only be done with strict safeguards, clear transparency, and updates to our documentation and applicable terms before any change takes effect.

Could this policy change in the future?

Any such change would be reflected in our documentation and applicable terms.

Are my prompts and AI outputs shared with other customers?

Prompts and outputs are processed only within the TeamViewer service environment and are not shared with other customers. They are not used to train or improve external AI models. TeamViewer may use fully anonymized data to monitor the performance and reliability of its services, always with appropriate safeguards in place.

What information sources does Tia use?

Tia uses information from within the TeamViewer environment, including:

  • A device data snapshot automatically taken at the start of the session
  • Screenshots only when explicitly triggered by the user
  • In-session conversation context
  • Past Session Insights, where the user has permission to access them

Why can Tia see system resource information such as CPU and RAM usage?

This is because system resource information is part of the device data snapshot that is supplied to Tia during a session.

What does the device data snapshot include?

Tia receives a limited technical device data snapshot automatically at the start of the session to support troubleshooting. This snapshot may include:

  • Hardware information such as device model, manufacturer, serial number, CPU, RAM, storage, graphics, monitors, network adapters, and BIOS version
  • Operating system information such as OS name, version, build, language, locale, time zone, and system uptime
  • Installed software and drivers, including application and driver names, versions, publishers, and install dates
  • System services, including service name, type, and state
  • Network and security state, such as adapter type/model, local IP/subnet, and firewall active state
  • Battery and device status, such as battery charge level and device uptime
  • Non-personal user environment settings, such as language and time zone

It does not include user content, personal documents, communication data, keystrokes, or activity data.
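
A technical snapshot of this kind can be assembled from standard system interfaces. The following is a minimal cross-platform sketch using Python's standard library; the field names and coverage are assumptions for illustration, not Tia's actual collector:

```python
import platform
import socket
import time

def device_snapshot() -> dict:
    """Assemble a small, non-personal technical snapshot for troubleshooting context."""
    return {
        "hardware": {"architecture": platform.machine()},   # e.g. x86_64 / arm64
        "os": {
            "name": platform.system(),                      # e.g. Windows / Linux / Darwin
            "release": platform.release(),
            "version": platform.version(),
        },
        "network": {"hostname": socket.gethostname()},
        "environment": {
            "timezone": time.tzname[0],
            "captured_at_utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        },
    }
```

Note that everything collected here is configuration-level metadata; no file contents, keystrokes, or user activity are touched.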

When and how does Tia receive device data?

At the start of the session, Tia automatically receives a limited device data snapshot containing technical system and configuration information needed for troubleshooting. Additional context, such as screenshots, is only retrieved when explicitly triggered by the user.

Can AI features take automatic action on my devices?

None of the AI features take automatic action on the device. Tia does not have real-time access to the device, in either read-only or write form. It operates on a limited technical device data snapshot taken at the start of the session and on screenshots only when explicitly triggered by the user. Tia can provide guidance and recommendations, but it cannot execute code, install software, or modify systems.

In the scripting workflow, AI can generate script drafts, but execution remains under user control and approved workflows. Scripts are stored for review and can only be executed through permission-controlled Remote Scripting.

Can AI features be switched off?

Admins can switch AI functionalities on or off via AI Admin Settings.

Can AI usage be restricted by user, device, or feature?

Yes. Administrators can control AI usage through Admin Settings with granular access boundaries, including:

  • By user or role: Manage who can access AI features and AI-generated outputs via User Permissions.
  • By device or group: Define where features are enabled using Policies (for example, exclude specific devices or device groups from Session Insights data collection).
  • By feature: Switch AI capabilities on or off via AI Admin Settings.

This allows customers to roll out TeamViewer AI in a least-privilege way and align usage with internal governance requirements.
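
Conceptually, these layered controls combine as a conjunction: the feature switch, the user permission, and the device policy must all allow access. The following is a hypothetical model of that check; the types and field names are illustrative, not the actual Admin Settings API:

```python
from dataclasses import dataclass, field

@dataclass
class TenantAISettings:
    """Illustrative model of tenant-level AI access controls (not the real API)."""
    enabled_features: set = field(default_factory=set)     # feature on/off switches
    user_permissions: dict = field(default_factory=dict)   # user -> set of features
    excluded_devices: set = field(default_factory=set)     # devices excluded by policy

def is_allowed(s: TenantAISettings, user: str, device: str, feature: str) -> bool:
    """Least-privilege check: feature enabled AND user permitted AND device not excluded."""
    return (
        feature in s.enabled_features
        and feature in s.user_permissions.get(user, set())
        and device not in s.excluded_devices
    )

settings = TenantAISettings(
    enabled_features={"session_insights"},
    user_permissions={"alice": {"session_insights"}},
    excluded_devices={"hr-laptop-01"},
)
print(is_allowed(settings, "alice", "dev-vm-02", "session_insights"))    # True
print(is_allowed(settings, "alice", "hr-laptop-01", "session_insights"))  # False: device excluded
print(is_allowed(settings, "bob", "dev-vm-02", "session_insights"))      # False: no permission
```

Because every check must pass, denying at any one layer (feature, user, or device) is sufficient to block access, which is what makes the least-privilege rollout possible.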

How can customers validate AI outcomes?

You can validate outcomes through controlled artifacts:

  • Session Insights summaries/action steps (permission-controlled)
  • Generated scripts stored in the tenant script database (reviewable/reusable)
  • Tia context sources, limited to the defined inputs (chat context, device data snapshot, user-triggered screenshots, permissioned Session Insights)

What data does Tia collect?

Tia is limited to a defined set of technical device data, user-triggered screenshots, in-session conversation context, and permission-based Session Insights. It does not collect user content, personal documents, communication data, keystrokes, or activity data. The technical device data it uses is information that a support agent could also view during a standard remote support session.

How does TeamViewer protect privacy in its AI workflows?

Through layered anonymization (on-device and additional cloud-based anonymization), data minimization, and admin controls that govern rollout and access to AI outputs.

How can I request compliance or security documentation?

Request via TeamViewer’s Trust Center: compliance.teamviewer.com.

Is TeamViewer certified for AI management?

Yes. TeamViewer has established an AI Management System (AIMS) aligned with ISO/IEC 42001. Certification is in progress (not yet completed as of the date of this document).

In addition, TeamViewer follows recognized security best practices as part of our secure development lifecycle, including relevant guidance from OWASP (including LLM security guidance where applicable), NIST, and CIS benchmarks, as well as recommendations from the Cloud Security Alliance (CSA) for AI security and governance.

How does TeamViewer protect against misuse of its AI capabilities?

TeamViewer uses a layered defense approach to reduce the risk of misuse and abuse of AI capabilities. Depending on the workflow, protections may include:

  • Prompt injection protections and jailbreak resistance measures
  • Hallucination mitigation (for example, guardrails and constrained tool access)
  • Rate limiting, abuse detection, and monitoring
  • Logging to support investigation and governance
  • Human override and approval controls, for example permission-controlled script execution
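
Rate limiting of the kind listed above is commonly implemented as a token bucket: each request consumes a token, and tokens refill at a fixed rate, so short bursts are allowed while sustained abuse is throttled. A generic sketch (not TeamViewer's implementation):

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity` requests; refill at `rate` tokens per second."""
    def __init__(self, capacity: float, rate: float):
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(capacity=3, rate=1.0)  # burst of 3, then 1 request/second
print([bucket.allow() for _ in range(5)])   # first 3 allowed, remaining throttled
```

In practice such limiters are applied per user, tenant, or API key, and throttled requests feed the abuse-detection and monitoring layers mentioned above.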