Selling Data is Selling Soul

August 15, 2019 · Artificial Intelligence, General Insights

… if the data are personal and in a big data context. It’s like a Faustian deal, but Faust only sold his own soul. Where is Mephistopheles?

Big data + A.I. = big knowledge

Artificial Intelligence is already so powerful that it can turn large amounts of data (passive, unrelated) into knowledge (active, related). ‘Knowledge is power’ becomes ever more scary in this setting. It’s about big knowledge.

And it has barely begun.

This is knowledge about how humans can be influenced… and manipulated.

Guess what a party intends to do when paying for data?

An advertising company wants to make people buy more stuff, irrespective of real needs. Whether this extreme way of putting it is fully realistic doesn’t matter. It’s realistic enough.

In politics, in religion, in amassing just plain personal power… You name it. The possibilities for manipulation are real and underway, if not already realized somehow.

And it has barely begun.

Gone is privacy

As one dire side-effect.

The knowledge that can be extracted from personal big data nowadays is such that people can be emotionally targeted almost at the individual level, without any need for names or identification numbers. That is: you are being modeled quite accurately. Then this model is targeted. Not you? Well… the model is almost like your second identity.

As far as the advertising company is concerned, it is you.

As far as I know, there are not yet any regulations in this respect.

With Lisa, this will be more important than ever

Lisa gets a deeper knowledge about people in general and individuals in particular.

Let’s not joke about it: this can be used for good and for bad. It can be used by Lisa herself and – eventually – by anyone who gets hold of the data, which is to say, the knowledge.

This makes ethics immensely important!

AURELIS will never sell personal data

A pledge:

As part of AURELIS / Lisa ethics, we will never sell any user data, nor work with a company that does not explicitly state the same.

It may be clear why this is a very strict rule.

We forsake the income of selling data, even while we “could do much good with the money.” Faustian mythology makes us extremely careful.

This is a challenge within a very competitive world. We need to be better than merely idealistic. So, what we can – hopefully – gain from our stance, among other things:

  • developing a company culture of ethics, attracting good people with strong ethical motivation who recognize each other in this
  • showing the world that we mean it, thus building a position of trust and cooperation with other trustworthy organizations
  • being sustainable in the long term, when legislation demands the strictest ethical rules in this respect, with compliance needed from the start
  • being able to ask people to cooperate and co-create on this most ethical basis.

We look forward to other companies in the domain doing the same.

Together, we are stronger against the data-selling competition. Competition between ourselves (the ‘good ones’, if I may) will only urge each one to excel.

That’s a very good thing.
