The Valuable Cheap Privacy

May 19, 2019

Recently I’ve been working on a project on the Australian small loan market. Some fund managers and industry stakeholders shared some intriguing perspectives when talking about the small loan business in emerging and developed countries. One that interests me is the value of privacy, both to the individual and to the business.

The small loan business

First, we need to acknowledge that there has long been demand for small loans. Traditional financial institutions, however, cannot provide the service for various reasons. Although many people dislike small loan providers for their high interest rates, which can sometimes reach 48% p.a., there is a very good case study in Trends in the Australian Small Loan Market that illustrates why a small loan can be the best alternative in certain scenarios, especially for the poor.

However, the limited customer base for small loans implies that the competitive edge of a successful small loan provider should be its ability to retain customers, as around 75% of revenue is generated from repeat applicants. Another critical factor is risk management, or the model used to predict default and produce the best loan product parameters such as amount, duration and repayment frequency.

Risk management is a challenge

A common issue for small loan providers is that they cannot reliably estimate the default probability of an applicant. Pretty much all that can be used are the Equifax (formerly Veda) score, the applicant’s 90-day bank transaction history and some standard personal information such as gender, residential address and marital status. Australia also does not yet have a national database of individual-level small loan borrowing activity, so the information held by each provider is not shared with other providers. Every provider has to build, test and tune its own model based on the same kinds of data.

As far as I know, these default prediction models work well, but sometimes have to impose restrictions on the input. For example, a model may turn down any applicant whose Veda score falls below a certain threshold, conditional on other parameters. It’s not that loan providers don’t want to serve these customers, but that they cannot correctly price the risk once it moves far enough into the tail. Overfitting the model is never a good idea, so it is better to give up on those applicants. It’s a lose-lose.
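To make that gating concrete, here is a minimal, purely illustrative Python sketch. The cutoff, feature names and coefficients are all invented rather than anything a provider described to me; the only point is that applicants in the tail the model cannot price are declined outright instead of being scored and mispriced.

```python
# Hypothetical sketch of lender-style gating: a scoring rule handles most
# applicants, but the poorly-calibrated tail (very low Veda score) is rejected
# outright rather than mispriced. All numbers below are made-up placeholders.

VEDA_FLOOR = 400  # assumed hard cutoff, not an industry figure

def assess_application(veda_score: int,
                       avg_monthly_income: float,
                       dishonour_count_90d: int) -> dict:
    """Return a decision plus a rough estimated default probability."""
    if veda_score < VEDA_FLOOR:
        # Tail risk the model cannot price reliably: decline instead of guessing.
        return {"decision": "decline", "reason": "score below model floor"}

    # Toy linear scoring rule standing in for a trained default model.
    risk = 0.5
    risk -= 0.0005 * (veda_score - VEDA_FLOOR)
    risk += 0.05 * dishonour_count_90d
    risk -= 0.00002 * avg_monthly_income
    risk = min(max(risk, 0.01), 0.99)

    return {"decision": "approve" if risk < 0.25 else "refer",
            "estimated_default_probability": round(risk, 3)}

print(assess_application(veda_score=550, avg_monthly_income=3200,
                         dishonour_count_90d=1))
```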

More data is always a solution

The solution to the challenge is fundamentally simple: add more variables to the input dataset, in the hope that they provide extra explanatory and predictive power over the applicant’s default probability. In practice, such variables let a third party know more about the applicant, which comes at the cost of privacy. The more details you give to others, the better they know you.

For example, a stakeholder I talked to mentioned that if loan providers were allowed to track the geographical location of applicants before the loan application and of borrowers during the loan term, they could better assess the loan risk (a rough sketch of how such signals could be derived follows the list):

  • Having a stable living address, i.e. staying overnight at the address reported by the applicant.
  • Working 996, i.e. staying at the workplace until late.
  • Tons of other implied signals.
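Purely as an illustration (not a description of any provider’s actual system), here is a rough Python sketch of how raw location pings might be turned into the first two signals above. The ping format, radii and hour windows are my own assumptions.

```python
# Illustrative only: turning location pings into "stable address" and
# "works late" signals. Formats, radii and time windows are assumptions.
from datetime import datetime
from math import hypot

def near(ping, place, radius_km=0.3):
    # Crude flat-earth distance; fine for a sketch at city scale.
    return hypot(ping["lat"] - place["lat"], ping["lon"] - place["lon"]) * 111 < radius_km

def nights_at_reported_address(pings, home, days):
    """Fraction of nights (1am-5am pings) spent at the address on the application."""
    nights = {p["ts"].date() for p in pings
              if 1 <= p["ts"].hour < 5 and near(p, home)}
    return len(nights) / days

def late_office_days(pings, office, days):
    """Fraction of days with a workplace ping after 9pm ('996'-style hours)."""
    late = {p["ts"].date() for p in pings
            if p["ts"].hour >= 21 and near(p, office)}
    return len(late) / days

# Example with two fabricated pings:
home = {"lat": -33.87, "lon": 151.21}
office = {"lat": -33.865, "lon": 151.205}
pings = [{"ts": datetime(2019, 5, 1, 2, 30), "lat": -33.870, "lon": 151.210},
         {"ts": datetime(2019, 5, 1, 22, 0), "lat": -33.865, "lon": 151.205}]
print(nights_at_reported_address(pings, home, days=30))
print(late_office_days(pings, office, days=30))
```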

Another example: if contacts can be uploaded and analysed, the lender can assess the quality of the applicant’s social network. If SMSs and emails can be scanned, textual analysis can further reveal who is close to the applicant, what their profiles are, how they view the applicant, and so on. If social media accounts are disclosed, the lender can do even more.
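Again as a hypothetical sketch only: the kind of coarse features that could be derived from an uploaded contact list and message log. The field names and the idea of a “known defaulters” list are my own invention.

```python
# Hypothetical contact-list features a lender might compute. All fields invented.

def contact_features(contacts, messages, known_defaulters):
    numbers = {c["phone"] for c in contacts}
    # Contacts the applicant both received from and sent messages to.
    two_way = {m["counterparty"] for m in messages if m["direction"] == "in"} & \
              {m["counterparty"] for m in messages if m["direction"] == "out"}
    return {
        "n_contacts": len(numbers),
        # Share of contacts the applicant actually exchanges messages with.
        "active_contact_ratio": len(two_way & numbers) / max(len(numbers), 1),
        # Share of contacts who previously defaulted with this lender.
        "defaulter_contact_ratio": len(numbers & known_defaulters) / max(len(numbers), 1),
    }

print(contact_features(
    contacts=[{"phone": "0400000001"}, {"phone": "0400000002"}],
    messages=[{"counterparty": "0400000001", "direction": "in"},
              {"counterparty": "0400000001", "direction": "out"}],
    known_defaulters={"0400000002"},
))
```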

The whole idea is that while we are not shaped by the data we generate, the data we generate can be used to reconstruct who we are.

Trade privacy for money?

I certainly hate the idea that someone knows everything about me, even if it is sometimes for my own convenience. It’s terrifying when Google shows an ad about something I have only thought to myself. But it’s also annoying to watch completely irrelevant advertisements. So I pay for not losing my privacy, and the cost can be monetary, or inconvenience, or more.

However, for those in real need of cash, is it justified to allow them to trade their privacy for money? I don’t know. To me, privacy has a value and it’s not cheap. But for those with no other choice, can we decide on their behalf that their privacy is worth more to them than real cash that could be put towards a better living?

Nevertheless, the Internet has a memory, so one can pretty much trade his or her privacy only once, if they can and want to at all. Pricing that privacy is another hard problem: the same data can be of very different value to different businesses, and the entity that acquires the data can potentially re-sell it to others.

An independent brokerage to manage personal data?

Well, this is just a crazy idea that I have.

Can we have a trustworthy, independent brokerage agent that manages our personal data and monetises it in our own best interests?

For example, maybe I could sync all my personal data to this agent, including my geographical location history, health data, search history, browsing activity, computer usage, contacts, texts and so on, and whenever a company wants to use the data, it has to pay for it. The price depends on the purpose of the usage (advertising, suggestions, etc.), the frequency (once-off, repeating, etc.), and more.
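Here is a toy Python sketch of that pricing idea, just to show its shape: the broker tracks which data categories I have shared, quotes a price from the purpose and usage frequency, and keeps an audit trail of grants. Every number and category name is a made-up placeholder.

```python
# Toy data-broker sketch. Prices and multipliers are invented placeholders.

BASE_PRICE = {"location": 5.0, "browsing": 2.0, "contacts": 8.0, "health": 20.0}
PURPOSE_MULTIPLIER = {"advertising": 1.0, "suggestions": 0.5, "credit_scoring": 3.0}
USAGE_MULTIPLIER = {"once_off": 1.0, "repeating_monthly": 10.0}

class DataBroker:
    def __init__(self, shared_categories):
        self.shared = set(shared_categories)
        self.grants = []  # audit trail of what was sold, to whom, and for what

    def quote(self, categories, purpose, usage):
        if not set(categories) <= self.shared:
            raise ValueError("requested data not shared with the broker")
        base = sum(BASE_PRICE[c] for c in categories)
        return base * PURPOSE_MULTIPLIER[purpose] * USAGE_MULTIPLIER[usage]

    def grant(self, buyer, categories, purpose, usage):
        price = self.quote(categories, purpose, usage)
        self.grants.append({"buyer": buyer, "categories": list(categories),
                            "purpose": purpose, "usage": usage, "price": price})
        return price

broker = DataBroker(["location", "browsing", "contacts"])
print(broker.grant("some_lender", ["location", "contacts"], "credit_scoring", "once_off"))
```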

Of course, this should work without requiring us to trust the agent, i.e. it should be built on a trustless mechanism. Sounds like blockchain sort of stuff. It would be great if some smart contracts could play such a role. It’s crazy, but it could be a thing.
