By Ivan Lebowski, Sales Team Leader at Synology
Recently, a tech giant introduced an AI feature that takes screenshots and stores them locally on its upcoming AI PCs for later recall. Despite the company’s assurances regarding encryption, the UK Information Commissioner’s Office (ICO) is seeking more information about the product’s compliance and security. Privacy advocates have called the feature a “privacy nightmare,” leading the company to delay its release.
This incident highlights broader concerns about the privacy of AI-powered technologies. Public cloud photo management services, for example, often use facial recognition to organize photos, providing convenience but raising privacy questions. Even as companies emphasize encryption and user control, privacy advocates worry about data collection, potential misuse, and the creation of facial recognition databases. Keeping data on a NAS gives users far tighter control over how it is stored and processed.
For private cloud solution providers, data security and privacy are top priorities. Whether for a NAS (Network Attached Storage) or a broader private cloud, data ownership is fundamental: customers must be assured that their data remains under their control and is not exploited for unauthorized purposes. As AI capabilities are integrated into these ecosystems, privacy rights must be built into the design principles, taking precedence over functional efficiency. Transparency is equally crucial: customers should know where their data is stored, how it is protected, and who has access to it. By adopting these principles by design, providers not only protect their customers’ data but also increase their trust and satisfaction.
Article of the week
Ensuring data privacy with pre-trained AI models and private cloud architecture
Facial and object recognition can be integrated into photo management applications to improve the organization and search experience. The AI recognition models used in these applications are pre-trained on publicly available datasets during development, before any user ever interacts with the application.
Unlike SaaS photo management services, which store user data in the solution provider’s data centers, a private cloud keeps all user data on the user’s own devices, ensuring full data ownership. This means user data cannot be, and will never be, used to train AI models.
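The architecture described above can be sketched in a few lines. This is an illustrative model only, not an actual product API: the class and function names (`PretrainedFaceModel`, `index_photos`) are hypothetical, and the "embedding" is a stand-in for real inference. The point it demonstrates is the data flow: the model’s weights are fixed at build time, inference runs entirely on-device, and user photos never feed back into training or leave the NAS.

```python
# Hypothetical sketch of on-device photo indexing with a pre-trained model.
# All names are illustrative; the "embedding" stands in for real inference.

class PretrainedFaceModel:
    """Weights are fixed at build time; user photos never update them."""

    def __init__(self):
        self.trainable = False  # inference only, by design

    def embed(self, photo_bytes: bytes) -> tuple:
        # Stand-in for real inference: a deterministic, purely local computation.
        return (len(photo_bytes), sum(photo_bytes) % 256)


def index_photos(photos: dict) -> dict:
    """Build a searchable index on-device; returns {filename: embedding}.

    Nothing is transmitted anywhere: input and output both stay local.
    """
    model = PretrainedFaceModel()
    return {name: model.embed(data) for name, data in photos.items()}
```

Because the model object exposes no training path, the privacy guarantee is structural rather than a matter of policy: there is simply no code path by which user data reaches the model’s parameters.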
Likewise, surveillance solutions can integrate advanced AI features such as person and vehicle detection, facial recognition, and object tracking, improving security by reducing false alarms. These pre-trained AI models run efficiently within the user’s own network. Combined with data encryption and access control settings, this approach provides powerful monitoring tools while preserving the privacy and security of user data and eliminating dependency on third-party cloud services.
Addressing shadow AI and data leaks with a dedicated admin console
Large Language Models (LLMs) can be integrated into office suites, allowing users to incorporate generative AI as a writing assistant for emails, slides, documents and spreadsheets.
To help organizations maintain privacy and mitigate the risk of shadow AI while using generative AI tools, an AI admin console can be introduced for system administrators. Such a tool provides granular control over which AI models and versions can be deployed within the organization, which models are permitted for specific organizations, groups, or individuals, and de-identification settings covering region-specific information such as Social Security numbers (SSNs). This allows businesses to use AI tools securely without compromising sensitive data.
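A minimal sketch of such a policy model is shown below, assuming a simple data shape invented for illustration (the `AIPolicy` class and `can_use` function are not a real console API). It captures the two controls the paragraph describes: per-group allow-lists of model versions, and a set of field types to de-identify before a prompt ever reaches a model.

```python
# Illustrative admin-console policy model; names are hypothetical.
from dataclasses import dataclass, field


@dataclass
class AIPolicy:
    # group name -> set of model IDs that group may invoke
    allowed_models: dict
    # field types to strip/mask before prompts leave the organization
    deidentify_fields: set = field(default_factory=set)


def can_use(policy: AIPolicy, group: str, model: str) -> bool:
    """Return True only if the group is explicitly granted this model version."""
    return model in policy.allowed_models.get(group, set())


policy = AIPolicy(
    allowed_models={"engineering": {"llm-small-v2"}, "hr": set()},
    deidentify_fields={"SSN"},
)
```

The deny-by-default lookup (`.get(group, set())`) is the key design choice: a group absent from the policy has no access at all, which is what prevents shadow AI deployments from slipping through.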
Robust AI customer support through GDPR-compliant de-identification processes
AI can also be used to improve customer service experiences. An AI support system can be built to extract data from a knowledge base consisting of anonymized past technical support data.
To ensure customer privacy, the past support data used to build the knowledge base undergoes a complete de-identification process, using methodologies such as regular expressions (regex), Named Entity Recognition (NER), checksum validation, and contextual analysis to detect and anonymize sensitive information. The system also applies query guardrails to prevent it from sharing sensitive information or unintentionally instructing users to delete critical data.
Ongoing Commitment to Data Privacy
Amid these developments, it is crucial to remain committed to prioritizing privacy. With privacy-first design principles and data security guidelines, users can trust that their data is secure and protected, allowing them to benefit from advances in AI without compromising their privacy.