Undermined: Explaining Why Data Collection and Mining can be Harmful


You were mentioning HIPAA as a means of protecting people, but there are a lot of ways to get at health data, and HIPAA's protection only goes so far, as this Berkeley news article describes:

Advancement of artificial intelligence opens health data privacy to attack



I see what you're saying, it's complicated. I guess what I meant was that actual diagnoses, results of tests, etc. are secure. But the data metrics from wearables, voice data mining, etc. could be nightmarish. I may have mentioned this on the forum before, but a colleague of mine said she was talking to her friend about her migraine headache; well, about an hour later she was hit with multiple ads for migraine medicine. This colleague had no idea what data mining and privacy were until that moment.

With that said, hospital and pharmacy prescription records are secure; HIPAA can't protect what we don't keep private ourselves. The medical field is so laden with regulation that it's hard to get previous records on patients even when they are desperately needed.

Scientifically speaking, no person can paint a medical picture from data mining. It can't be proved in court; no real professional expert witness could prove it, as they would lose their license. There are also laws that protect people from insurance exclusion based on previous diagnoses.

There is concern about the unknown, but actual real medical information will not be leaked without consequences.

The medical field is very slow to adapt to technology, so when AI is used in medicine it will be for the treatment of disease. The ethical and federal standards create firewalls that are very hard to break down, even with new discoveries.



While this is true, nothing has to be proven in court. The AI's conclusions about your medical state are consumed and judged behind the scenes without you being aware of it. An employer who declines to hire you because of a higher-than-usual health risk score will simply reject you with some made-up story.



I see what you're saying now: discrimination at its worst! The SF Bay Area has some pretty fierce employment lawyers and boards of supervisors… but now I think I've been living in a bubble that's going to pop one day…

So with location tracking and wearables, it can be known whether people are actually working out at the gym or just visiting with people!



Thanks for welcoming my advice, which probably just sounded grumpy. :slight_smile: I am up to my eyeballs in pro bono and “low bono” work—extremely low pay or volunteer work to make the world a better freakin’ place. Saving the local farmers market. Writing and placing articles to influence local and regional policymakers. Lots of arts, community, and kids-oriented work.

What I’d suggest, if folks here are serious about all this, is that you hire a firm to make a truly kick-ass communications and/or brand platform, before wasting any more time and energy with the scattershot method. I don’t have the bandwidth to do it pro bono. This is worth spending real money on. Plazm, for example, would probably give you something solid to start with at a highly reduced nonprofit rate—for a smallish investment, you should be able to get a whole lot of strategic & comms work done.

My guess is that multiple sub-platforms might be necessary, based on personal development. You need a whole different package and approach to reach Joe Americana, compared to the approach that would reach a Silicon Valley indentured servant to technology, or a Liberal Bubble dweller.

These forums seem likely to help the latter two formulate messaging and spread it among their own communities. That’s definitely a good start! But for the larger-scale stuff, just hire some professionals and do the thing right. Saves so much money and time in the long run…



Yo I got this book and started reading it. All of this data mining stuff is making so much more logical sense now.



I was finally able to explain this to my friend (and have her believe me) when it comes to data mining. I started with a "Did you know that…" example about credit scores. I also explained how that could affect her getting a job, etc. She said it sounded like a conspiracy theory, but that she understands. I also showed her Privacy Badger and all the different trackers it blocked. We have a breakthrough, everyone!

She did, however, ask why it mattered that her data could be used via a credit or health score if she was not doing anything wrong on the Internet.



@Siddhi It’s a start! I definitely have those moments with my colleagues-

Just a thought: for your project, you could have a sample dialogue or interview explicitly about someone inquiring about data mining; this is important to understand. Or a comic / graphic novel page. Ever heard of "Valley Girl"? You could use the Valley Girl language to the extreme: why do I care about my personal data?

Anyways… progress made one step at a time…



Yes! That would be a cool idea! When I get farther along I will definitely try that.



That's it: besides automated systems making wrong or biased decisions, it is unknowable whether a given decision is right or wrong. Someone else is now the decider, based on their own criteria. And they will judge you by it, now and forever: your physical, mental, and emotional state.

Bossy in communications? —> You are not a team player —> Sorry, this job is not for you
Online at night, typing quickly, making mistakes —> You can't handle stress, update life expectancy —> Sorry, you are not admitted to this insurance plan

And when you encounter real mental illness, or it is interpreted that you have, then you get permanently labeled: bipolar, depressed, sociopath, etc.
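A toy sketch can make concrete how this kind of opaque, rule-based scoring might work behind the scenes. Every signal name, rule, and threshold below is invented purely for illustration; real scoring systems are proprietary and far more complex:

```python
# Hypothetical illustration of an opaque behavioral "risk score".
# All signal names and thresholds here are made up for this sketch.

def risk_score(signals: dict) -> int:
    """Sum crude penalties over behavioral signals; higher means riskier."""
    score = 0
    if signals.get("assertive_messages", 0) > 10:
        score += 20   # interpreted as "not a team player"
    if signals.get("late_night_sessions", 0) > 5:
        score += 30   # interpreted as "can't handle stress"
    if signals.get("typos_per_message", 0.0) > 0.2:
        score += 10   # interpreted as carelessness
    return score

applicant = {
    "assertive_messages": 12,
    "late_night_sessions": 7,
    "typos_per_message": 0.3,
}
print(risk_score(applicant))  # prints 60: rejected, with no explanation given
```

The point is not the specific rules but that the person being scored never sees them: the score is computed, consumed, and acted on entirely out of view, with no way to contest it.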



So what sorts of regulatory options are there? If tech is evolving too fast for the government (as seen in the embarrassing Facebook-Twitter Congressional hearing) how can we reconcile that?



This is the gray, scary area confronting those of us who understand the data mining monster's potential. Only this monster under the bed is actually real!

@aschrijver made this point in another post about healthcare. Yes indeed, we are protected by HIPAA in the US, but what about the info from wearables and map locators? Insurance companies and potential employers could paint a picture of you, and you could face discrimination before they even meet you: a lost opportunity.

Could one regulate this based on “digital slander”?



This article talks about benefits from wearables for both employers and employees.

Many consumers are under the mistaken belief that all health data they share is required by law to be kept private under a federal law called HIPAA, the Health Insurance Portability and Accountability Act. The law prohibits doctors, hospitals and insurance companies from disclosing personal health information.

But if an employee voluntarily gives health data to an employer or a company such as Fitbit or Apple — entities that are not covered by HIPAA’s rules — those restrictions on disclosure don’t apply, said Joe Jerome, a policy lawyer at the Center for Democracy & Technology, a nonprofit in Washington. The center is urging federal policymakers to tighten up the rules.



Totally agree on this: once medical information is in the medical record it becomes secure (that is, if there is no data breach). But any data collected by any other company is fair game, even for insurance companies. There is no privacy guaranteed for information gathered outside of organized healthcare like clinics, hospitals, or pharmacies. Inside these institutions, HIPAA covers it.



This company does a similar thing, but for truck drivers. It tracks their every move for productivity, but there could be issues with it.

On another note, my friend has started noticing how Facebook serves very targeted ads and decided to delete the app and use Opera Touch. Lots of progress.



@Siddhi, I found another very nice depiction of how you are undermined when giving out personal data over the internet. It is part of this cool article:

And here is the picture:



This is perfect. If I ever become a teacher I will hang this in my room. Even if I don’t, I’m still going to hang it in my room :smiley:



Fast Company’s article on data brokers:

There is another infographic in it about the personal data that is harvested from you (in this case by one provider called Acxiom):

Also chilling was the suggestion in the following paragraph that databases could be used by stalkers and violent spouses:

Piles of personal data are flowing to political parties attempting to influence your vote and government agencies pursuing non-violent criminal suspects. Meanwhile, people-search websites, accessible to virtually anyone with a credit card, can be a goldmine for doxxers, abusers, and stalkers. (The National Network to End Domestic Violence has assembled a guide to data brokers.)



Hi all, I saw this topic and had to come and chime in. To me, data privacy is a very important issue, and I try to protect mine as best as I can. However, most people I know outside of the very techy, privacy-oriented ones don't seem to really care at all; they brush it off as just the way it goes and tend to justify it by claiming that they have nothing to hide.

I think that these generalizations are the most dangerous things floating around out there, because even when shown the cold hard facts about data mining and everything associated with it these people still just brush it off.

So I think that the biggest challenge that we face in spreading awareness is not just presenting the facts but doing so in a way that causes people to change their underlying mindsets. In my experience, I’ve talked to a lot of smart and knowledgeable people who understand that their data is a commodity but don’t see the danger of it/don’t want to give up any conveniences.

I'm wondering what anyone thinks about how to best express what the real-world consequences are without sounding totally Orwellian about it. For example, I remember reading about a study done in China that used facial recognition and AI to analyze the faces of criminals in order to determine the likelihood that someone else would become a criminal (https://www.rt.com/news/368307-facial-recognition-criminal-china/). Though it was widely regarded as scientifically baseless, it is not hard to see future research going toward similarly dangerous topics that abuse the data we're just handing out.



I fully agree with you, @joshs! This thread contains just some insights that people wrote 'off the top of their head'. As part of our new mission and vision, we'll be tackling this with awareness campaigns that address your concerns, going about it with a mindset and approach of positivity and optimism, i.e. in a solution-oriented way, so that we show a way forward and are not mired in doom and gloom.