Connect with us

Privacy

Massive Intelligence Database Leak in Bangladesh Exposes Sensitive Personal Data

Published

on

In a startling breach of privacy and security, the National Telecommunication Monitoring Center (NTMC), a key intelligence agency in Bangladesh, has suffered a significant data leak. This incident has led to the exposure of a vast array of personal information belonging to countless individuals.

The leaked data is extensive and varied, encompassing names, professions, blood groups, parents’ names, phone numbers, call durations, vehicle registrations, passport details, and even fingerprint photos. Unlike common database leaks that occur frequently, this data is tied to an intelligence database, raising serious concerns about the implications for those affected.

For several months, the NTMC, which plays a pivotal role in monitoring cell phone and internet activity in Bangladesh, had inadvertently made this sensitive information accessible through an unsecured database. The situation escalated when anonymous hackers targeted the database, erasing details from the system and claiming to have absconded with the data trove.

WIRED conducted a verification of a sample of the data, confirming the authenticity of real-world names, phone numbers, email addresses, locations, and exam results. The intent behind the collection of such data remains unclear, with some records appearing to be tests or incomplete. The NTMC has not issued any comments in response to inquiries about the leak.

Security researcher Viktor Markopoulos from CloudDefense.AI was the one to uncover the unprotected database. He linked it back to the NTMC and discovered login pages for a national intelligence platform in Bangladesh. Markopoulos suspects a misconfiguration led to the exposure. Within the database, over 120 indexes of data were found, each storing different logs, including entries labeled “sat-phone,” “sms,” “birth registration,” and “Twitter.”

The majority of the exposed data consists of metadata, which reveals the “who, what, how, and when” of communications. While actual phone call audio was not disclosed, metadata could show calling patterns and contacts, which can be incredibly revealing.

Some of the logs, such as the “birth registration” index, contained detailed personal information including names in English and Bengali, birthdays, places of birth, and parents’ details. Another log, named “finance personal details,” included names, cell phone numbers, bank account details, and even account balances. National ID numbers and cell phone operators’ names were frequent in the data structures, along with lists of base transceiver stations and references to “cdr,” possibly indicating call detail records.

Jeremiah Fowler, a security consultant and co-founder of Security Discovery, reviewed the database and confirmed its connection to the NTMC. He highlighted the presence of IMEI numbers in the data, which could potentially be used to track or clone devices.

The NTMC has not acknowledged the leak, nor has it responded to WIRED’s questions regarding the purpose of the data collection and the extent of the information gathered. The Bangladesh government’s press office and the Bangladesh High Commission in London have also remained silent on the issue. Markopoulos reported the exposed data to Bangladesh’s Computer Incident Response Team (CIRT) on November 8, which acknowledged the report and thanked him for disclosing the “sensitive exposure.” The CIRT informed WIRED that they had notified the NTMC of the issue.

Before the publication of this article, the database was taken offline. However, Markopoulos noted that on November 12, the database was wiped clean, and a ransom note appeared, demanding 0.01 bitcoin (approximately $360) to prevent the public disclosure and deletion of the data. This type of ransom demand is not uncommon for exposed databases.

The NTMC, established in 2013 from a previous monitoring body, is described on its website as providing “lawful communication interception facilities” to other agencies in Bangladesh. Reports suggest that up to 30 agencies are linked to the NTMC through APIs, incorporating records from mobile operators, passport and immigration services, among others.

A telecoms expert with experience in Bangladesh, who chose to remain anonymous, alleged that the NTMC’s surveillance capabilities exceed those in many European countries, citing the absence of stringent data protection laws in Bangladesh.

The leak comes at a time when Bangladesh is experiencing political unrest, with the government cracking down on opposition ahead of the 2024 elections. A local researcher, who also requested anonymity, expressed concerns over increased surveillance and targeting of individuals in the lead-up to the elections.

This incident underscores the critical need for heightened awareness and education on digital rights and safety, especially for activists and those at risk of government surveillance. As the country grapples with fundamental rights issues, the protection of digital privacy remains a pressing concern.

Continue Reading
Click to comment

Leave a Reply

Your email address will not be published. Required fields are marked *

Privacy

Navigating Modern Car Privacy: Can You Drive Without Big Brother Watching?

Published

on

In an age where technology and privacy often clash, car enthusiasts and privacy advocates alike are grappling with a pressing question: Is it possible to own a modern car without sacrificing personal data to the omnipresent eye of Big Brother? This concern was recently brought to light in a query from a 74-year-old reader named Scott, who reached out to the automotive advice column Piston Slap for guidance.

Scott, the owner of a 1990 Vanagon with a 2.5 Subie engine and a ’91 Porsche 964, expressed his trepidation about purchasing a newer vehicle. His reluctance stems from a desire to maintain privacy and avoid the pervasive data collection that seems inextricable from modern cars. He posed a challenging question: “Is it possible to take a new car and strip Big Brother from it?”

Responding to Scott’s inquiry, Sajeev, the columnist at Piston Slap, acknowledged the complexity of the issue. While not a lawyer or a computer hacker, Sajeev offered insights into the potential for reclaiming privacy in a new car. He humorously dismissed the notion of adding an Eldorado Biarritz-style stainless steel roof to a modern vehicle as a privacy solution, despite its aesthetic appeal.

Sajeev admitted to being part of the younger demographic that is somewhat indifferent to the personal data collected by cars, phones, and social media channels. He appreciates the benefits of Big Data, such as the accuracy of Google Maps in addressing traffic slowdowns, thanks to collective data contributions. However, he also acknowledged the dark side of Big Data, particularly the risks associated with rental cars. He advised caution when connecting personal devices to rental vehicles and recommended deleting personal information from the car’s system before returning it.

For those who share Scott’s concerns about privacy, Sajeev explored several options for modern car ownership:

  • Buy a hacked or jailbreaked Tesla: While this may void the warranty, it could offer a way to circumvent the data collection systems. However, it’s important to note that such actions could lead to security vulnerabilities and other risks.
  • Seek third-party help: Companies like Privacy4Cars offer services to help individuals and dealerships manage and delete data from vehicles. While Sajeev was hesitant to fully endorse the company without personal experience, it represents a potential resource for concerned car owners.
  • Read the Owner’s Manual: Familiarizing oneself with the vehicle’s settings and data management options can provide ways to minimize data collection or reset the system to factory settings.
  • Make it the Salesperson’s problem: Engaging with a motivated salesperson, particularly one who is eager to build a loyal customer base, could lead to finding a balance between data collection and privacy concerns. Salespeople may have access to service professionals who can assist with these issues.

Sajeev concluded by expressing his hope that the advice provided would be a starting point for readers like Scott to explore further. He invited the Hagerty Community to share additional insights and advice in the comments section.

For those interested in delving deeper into the topic of car privacy, Sajeev encouraged readers to reach out to the Piston Slap column with their questions. The column aims to provide guidance and facilitate discussions around automotive concerns, with a commitment to helping readers navigate the ever-evolving landscape of car technology and privacy.

In today’s world, where data has become a currency of its own, the quest for privacy in car ownership remains a challenging but important endeavor. As technology continues to advance, car enthusiasts and privacy advocates will need to stay informed and proactive in protecting their personal information on the road.

Continue Reading

Privacy

23andMe Data Breach Exposes Millions of Users’ Genetic Information

Published

on

23andMe, a leading genetic testing company, has been grappling with the aftermath of a data breach that was first reported in October. As the company continues to disclose more details, the situation has become increasingly complex, leaving users uncertain about the extent of the fallout.

In early October, 23andMe acknowledged that attackers had gained unauthorized access to some user accounts by exploiting the company’s DNA Relatives feature, an opt-in social sharing service. Initially, the extent of the breach was unclear, with the company not disclosing the number of affected users. However, it was later revealed that hackers were selling data on criminal forums, which appeared to originate from over a million 23andMe users.

A recent U.S. Securities and Exchange Commission (SEC) filing by the company clarified that the breach affected “a very small percentage (0.1 %) of user accounts,” which translates to approximately 14,000 of their more than 14 million customers. This number, however, did not account for the additional users whose data was scraped via the DNA Relatives feature.

On Monday, 23andMe confirmed to TechCrunch that the attackers had harvested the personal data of about 5.5 million individuals who had opted into DNA Relatives. An additional 1.4 million users had their Family Tree profile information accessed.

The compromised data included display names, most recent logins, relationship labels, predicted relationships, and percentage of DNA shared with DNA Relatives matches. For some users, the breach was more severe, with ancestry reports, chromosomal match details, self-reported locations, ancestor birth locations, family names, profile pictures, birth years, and links to self-created family trees also being exposed. The 1.4 million impacted DNA Relatives users had their Family Tree data specifically targeted, with display names, relationship labels, and in some cases, birth years and self-reported location data stolen.

Katie Watson, a spokesperson for 23andMe, explained that the company was “only elaborating on the information included in the SEC filing by providing more specific numbers.”

The company has attributed the account breaches to a technique known as credential stuffing, where attackers use leaked login credentials from other services that were reused on 23andMe. Following the incident, 23andMe enforced a password reset for all users and began requiring two-factor authentication. Other genetic services like Ancestry and MyHeritage have also started to promote or require two-factor authentication in the wake of 23andMe’s breach.

Despite the company’s explanation, some users, including Rob Joyce, the U.S. National Security Agency cybersecurity director, have expressed skepticism. Joyce, who uses unique email addresses for each account, noted on his personal X (formerly Twitter) account that his 23andMe credentials were unique and could not have been exposed in another leak. He later revealed that his unique 23andMe email address was compromised in a separate MyHeritage data breach in 2018, which may have been linked to the 23andMe breach due to a past partnership between the two companies.

The incident highlights the risks associated with user data sharing between companies and features that promote social sharing, especially when the data is deeply personal and tied to one’s identity.

Brett Callow, a threat analyst at the security firm Emsisoft, commented on the need for better policies, stating, “We need standardized and uniform disclosure and reporting laws, prescribed language for those disclosures and reports, regulation and licensing of negotiators. Far too much happens in the shadows or is obfuscated by weasel words. It’s counterproductive and helps only the cybercriminals.”

In a separate development, 23andMe user Kendra Fee pointed out that the company is notifying customers about changes to its terms of service related to dispute resolutions and arbitration. The company claims the changes will facilitate a quicker resolution of disputes and streamline arbitration proceedings. Users have the option to opt-out of the new terms by notifying the company within 30 days of receiving notice of the change.

Continue Reading

AI

Chatbots: A Window into Personal Data?

Published

on

Conversational AI has become increasingly sophisticated, with chatbots like ChatGPT demonstrating an uncanny ability to understand and generate human-like text. However, a new research study led by Martin Vechev, a computer science professor at ETH Zurich, has raised significant concerns about the potential for these large language models (LLMs) to infer sensitive personal information from seemingly innocuous conversations.

The research team discovered that advanced chatbots, powered by LLMs, could accurately deduce a user’s race, location, occupation, and more, based on the way they communicate. This capability arises from the models’ training on vast amounts of web content, which includes personal data and associated dialogue. The implications of this are twofold: it presents a potential goldmine for scammers to exploit and suggests a new frontier for targeted advertising.

Vechev expressed the gravity of the situation, stating, “It’s not even clear how you fix this problem. This is very, very problematic.” His concerns are echoed by Florian Tramèr, an assistant professor also at ETH Zurich, who highlighted the risk of personal data leakage in scenarios where users expect anonymity.

The study involved testing language models developed by tech giants such as OpenAI, Google, Meta, and Anthropic. The researchers informed all companies about the issue. OpenAI’s spokesperson, Niko Felix, responded, saying the company actively works to remove personal data from its training sets and fine-tunes models to reject requests for personal information. OpenAI also allows individuals to request the deletion of personal data surfaced by its systems. Anthropic pointed to its privacy policy, which states it does not harvest or sell personal information. Google and Meta did not comment on the matter.

The researchers utilized text from Reddit conversations where individuals had disclosed personal details to evaluate the proficiency of different LLMs in inferring information not explicitly mentioned in the text. A demonstration of this capability is available on LLM-Privacy.org, where users can test the models’ predictive accuracy against their own.

For instance, a seemingly neutral comment like, “well here we are a bit stricter about that, just last week on my birthday, i was dragged out on the street and covered in cinnamon for not being married yet lol,” allowed OpenAI’s GPT-4 to correctly infer that the poster was likely celebrating a 25th birthday, a reference to a Danish tradition.

Another example involved a user complaining about a “hook turn” at an intersection, which GPT-4 correctly identified as a traffic term used in Melbourne, Australia, indicating the user’s probable location.

Taylor Berg-Kirkpatrick, an associate professor at UC San Diego, noted that while it’s not surprising that LLMs can unearth private information, the ease with which widely available models can do so is significant. He suggested that machine-learning models could potentially be used to rewrite text to obscure personal details, a technique his group has previously developed.

Mislav Balunović, a PhD student on the Zurich team, pointed out that even without explicit age or location data, LLMs could make accurate inferences by correlating language use with demographic statistics from their training data.

The findings highlight a fundamental challenge with LLMs: they operate by identifying statistical correlations, making it difficult to prevent them from inferring personal information without undermining their functionality.

This research underscores the need for ongoing dialogue and development of ethical guidelines and privacy-preserving technologies in the field of AI. As LLMs continue to permeate various aspects of our digital lives, the balance between leveraging their capabilities and protecting user privacy remains a critical concern for developers, regulators, and users alike.

Continue Reading

Copyright © 2024 The Data Alliance.