On Artificial Intelligence and Data Mining: Dangers, Benefits and Staying Safe
(Posted on Monday, October 30, 2023)
For all the chatter and anxiety about artificial intelligence (AI) and the chaos it could unleash in academia, journalism, Hollywood, and politics, one of the quieter yet more ominous uses of the technology has been going on for years: data mining using machine learning.
The growing practice of AI data mining has given rise to concerns about the protection and confidentiality of personal information, from user credentials to other personally identifiable details.
So, the big question becomes: how truly private can our private lives be when everything we connect to and every decision we make are examined for patterns and tendencies, used not only to sell us things but to predict our actions?
Understanding AI Data Mining
AI data mining involves the extraction, analysis, and interpretation of vast amounts of data using machine learning algorithms. By sifting through colossal datasets from sources like big business, the Internal Revenue Service, and banks, AI systems can identify patterns, trends, and correlations in people’s behavior that would be impossible to see otherwise.
That’s not necessarily a bad thing. Data mining and predictive analytics are what allow Amazon to recommend products based on your past buying habits, and what give Google’s search software the seemingly preternatural ability to intuit what you’re searching for from just a word or two. The insights from data mining can also enable businesses and organizations to make informed decisions, enhance product development, and get ahead of trends.
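To make that concrete, here is a toy sketch in Python of the kind of pattern mining described above: counting which products tend to land in the same shopping basket and using those co-occurrence counts to suggest the next purchase. The product names and purchase histories are invented for illustration; real recommendation engines work from vastly larger datasets with far more sophisticated machine-learning models.

```python
# A toy illustration of pattern mining: find products that are frequently
# bought together, then recommend based on those co-occurrence counts.
# The baskets below are made up purely for illustration.
from collections import Counter
from itertools import combinations

purchases = [
    {"running shoes", "water bottle", "socks"},
    {"running shoes", "socks"},
    {"water bottle", "yoga mat"},
    {"running shoes", "water bottle"},
]

# Count how often each pair of products appears in the same basket.
pair_counts = Counter()
for basket in purchases:
    for a, b in combinations(sorted(basket), 2):
        pair_counts[(a, b)] += 1

def recommend(item: str, top_n: int = 2) -> list[str]:
    """Suggest the products most often bought alongside `item`."""
    scores = Counter()
    for (a, b), count in pair_counts.items():
        if item == a:
            scores[b] += count
        elif item == b:
            scores[a] += count
    return [product for product, _ in scores.most_common(top_n)]

print(recommend("running shoes"))  # e.g. ['socks', 'water bottle']
```

The same idea, scaled up to millions of shoppers and thousands of behavioral signals, is what powers the personalized recommendations described above.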
The concerns arise when AI data mining becomes too intrusive, or when bad actors use AI to manipulate large datasets that are either stolen in a data breach or available for purchase on the dark web.
In her Medium article, “The Current Identity Verification Tech Stack Won’t Survive AI and Real-Time Payments,” Jelena Hoffart writes: “The reality is that while AI will cause massive gains in productivity, efficiency, engagement, and personalization, the total picture is likely to be much less rosy. AI will also cause massive ‘gains’ in cyber-attacks, scams, fraud, and data breaches.”
The rise in fraudsters adding AI to their cyberattack toolkits should concern all of us.
As I write in Privacy Pandemic, there’s an inverse relationship between privacy and convenience in the digital age. Having more of one means having less of the other. Most of us have chosen to surrender some of our privacy and security in return for speed, convenience, variety, and savings.
But are we really saving? According to Juniper Research, eCommerce fraud losses will exceed $48 billion globally in 2023, roughly 16% growth over the past 12 months. So, while we love the speed and convenience of the internet, will we as global consumers continue to accept the status quo: reusing the same passwords, skipping multi-factor authentication on our digital accounts, and blindly trusting that every online brand has proper security measures in place?
AI and machine learning can be beneficial. In the wrong hands, however, the technology can go too far. Here are some of the ways:
- Data breaches. Now we’re getting serious. The simple reality of data mining is that there are terabytes of data records out there with your name on them, whether you know it or not. Each database and record represents a potential access point for hackers and cybercriminals. There’s a direct correlation between the amount of data in the wild and the risk of a breach: if just one out of ten companies has sloppy security and authentication procedures, or if we as consumers don’t update our credentials regularly, unauthorized individuals can gain access to your information, raising the risk of identity theft, fraud, and damage to your reputation.
- Government intrusion. Data mining, coupled with surveillance technology, raises concerns about the abuse of power by governments and law enforcement agencies. This can lead to unwarranted surveillance that erodes civil liberties and individual freedoms.
- Discrimination. AI algorithms can amplify biases already present in a dataset, which can lead to discriminatory outcomes such as being wrongly placed on a no-fly list or misidentified as a criminal suspect by law enforcement.
Staying Safe From “Mine Disasters”
Data mining isn’t going away; it brings the organizations that use it too many benefits, and frankly, it offers plenty to consumers, too. But we can’t allow it to threaten our privacy. Here are some ways we can fight back:
- Transparency. Organizations should be required by law to be transparent about their data collection practices, and some are. Those regular letters or emails you receive from your bank or other companies—the ones you never read—are those companies being transparent about how they use your data. Read them. Learn about the types of data being collected, what the company will use it for, how long it will be retained, and what your rights are. Object to practices you don’t like, and if you’re not satisfied, take your business elsewhere.
- Strong encryption. AI data mining should always be paired with robust encryption methods and technologies that secure personal data during storage and transmission. Encryption ensures that even if a breach occurs, the stolen data is unusable. Insist that any company you do business with have strong encryption policies (see the first sketch after this list for what this looks like in practice).
- Anonymization. All personal identifiers should be removed from datasets, making them anonymous, before they are used. This is an effective way to protect individuals’ identities and prevent identity theft (the second sketch after this list shows a simple example).
- Ethics. AI is uncharted territory, but calls are growing for companies to address biases in their algorithms, conduct regular audits, and obtain informed consent from consumers before collecting and using their data. Insist on this from your service providers, and make noise in the media and to your elected officials if you don’t get satisfaction.
- Education. Terms of service and privacy policies make for boring reading. But if you don’t know what your online retailer or your medical group is doing with your data, you’re powerless. Read everything you can about how companies are mining your data, and if you don’t like what you find, stop doing business with them and let your friends on social media know what you’ve done.
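To ground the encryption point, here is a minimal sketch of encrypting a personal record before it is stored, using the Fernet recipe from Python’s third-party cryptography package. The record contents and the in-code key are assumptions for the sketch; in a real system the key would be kept in a dedicated secrets manager, never alongside the data.

```python
# Minimal sketch: encrypt a record at rest with the `cryptography` package
# (pip install cryptography). Key handling is deliberately simplified here.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in production, load this from a secure vault
cipher = Fernet(key)

record = b'{"name": "Jane Doe", "card_last4": "4242"}'
token = cipher.encrypt(record)   # this ciphertext is what gets stored or sent

# Without the key, a stolen token is just opaque bytes.
print(cipher.decrypt(token))     # the original record, recoverable only with the key
```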
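Likewise, here is a toy example of the anonymization step: stripping direct identifiers from a record and replacing the user ID with a salted hash before the data is analyzed. The field names and salting scheme are assumptions made for this sketch; genuine anonymization also has to guard against re-identification from combinations of seemingly harmless fields.

```python
# Toy anonymization: drop direct identifiers and pseudonymize the user ID.
# Field names and the salt value are illustrative assumptions.
import hashlib

DIRECT_IDENTIFIERS = {"name", "email", "phone"}

def anonymize(record: dict, salt: str) -> dict:
    """Remove direct identifiers and replace the user id with a salted hash."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "user_id" in cleaned:
        cleaned["user_id"] = hashlib.sha256(
            (salt + str(cleaned["user_id"])).encode()
        ).hexdigest()
    return cleaned

row = {"user_id": 1001, "name": "Jane Doe", "email": "jane@example.com",
       "zip": "10001", "purchase": "running shoes"}
print(anonymize(row, salt="rotate-me-regularly"))  # no name, email, or phone left
```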
The power is yours if you choose to use it.
Stay Safe.
Christopher A. Smith is the author of Privacy Pandemic: How Cybercriminals Determine Targets, Attack Identities, and Violate Privacy—and How Consumers, Companies, and Policy-Makers Can Fight Back, to be released on November 7, 2023, by Amplify Publishing.