By Karen Petrou
On February 13, bipartisan Senate Banking leadership asked for views on how best to craft a new consumer-data privacy and security framework. Reflecting 2017’s Equifax debacle, the inquiry seems rooted in the credit-reporting framework. Essential though it is, data-integrity fixes for the credit bureaus aren’t anywhere near sufficient protection now that consumer financial data are increasingly clutched in the hands of Facebook, Amazon, Google, and an array of lightly- or un-regulated technology-based consumer-finance providers. As we have demonstrated, sustainable, sound, and fair consumer credit is critical to economic equality.
Households with no margin for financial error and more debt than most could repay under stress are uniquely vulnerable to financial products that overlook or overcharge them or that use personal data to pitch unsuitable products.
The U.S. consumer-finance system in 2019 is a Model T welded to a Maserati – that is, who can offer which products under what rules hasn’t changed much since 1987, when the still-permeable barriers between banking and commerce were crafted, and 1999, when the framework pairing financial services with insured deposit-taking was laid out. The Dodd-Frank Act of 2010, for all it did for consumer protection via the CFPB, made few changes to the structure of U.S. finance.
It is thus possible for like-kind deposit, loan, payment, and advisory services to originate inside and outside the bank-regulatory framework, often with considerable safety-and-soundness risks to providers and the broader financial system. Some counter this assertion on the grounds that only banks may offer FDIC-insured deposits or access Fed liquidity backstops, but backdoor access to these benefits combines with consumer assumptions that all providers of like-kind products have like-kind safeguards to make this difference increasingly nothing more than a statutory nicety.
With Senate Banking’s invitation in hand, this blog post turns to our prior calls for like-kind rules for like-kind products, laying out the questions Congress must quickly resolve if it hopes to craft a robust, forward-looking consumer-data framework that protects the most vulnerable households and their equality-essential financial data.
What’s the difference between innovative cross-selling and improper product tying?
Banks have demonstrated that cross-selling can be problematic even without the use of big data. However, old-school cross-selling violations transgress clear rules and are subject to stringent, if sometimes belated, enforcement. Unless cross-selling is fraudulent, deceptive, or can be deemed “abusive,” no rules govern a non-bank’s ability to combine the data a consumer provides for one service – e.g., a “free” look at a candidate’s views – with financial data the same consumer provides in a free search for credit, and then to pitch a very lucrative investment vehicle based on the consumer’s beliefs and the means to back them. Is this innovative? Maybe, but it’s also highly problematic if the investment product is unsuitable for the consumer due to risk or cost, or if it’s selected principally because it profits the platform company.
Which pricing subsidies benefit consumers and which unduly profit the provider?
Bank holding companies are barred from tying – e.g., mandating that a consumer who wants a loan also purchase insurance. A raft of disclosures also applies when traditional banking products are marketed in conjunction with other financial services. Again, bank-regulatory enforcement could be better – way better. Still, for tech companies, only fraudulent, deceptive, or abusive behavior is barred – and perhaps not even that, given the FTC’s limited authority.
Technology-based finance providers thus could demand that you use their “free” social-media services if you also want to send money to relatives in your home country via a remittance service along the lines Facebook contemplates. What about “exclusive” products such as hot Christmas toys available only to consumers who also select a platform company’s credit card, installment loan, or other financial offerings? Many merchants discount goods if a consumer takes out a sponsored card. None so far can mandate that you buy something to get a loan or condition the product on taking out costly credit.
Where does advanced underwriting stop and bias begin?
Algorithms are built to accomplish the objective humans set for them and only that objective, unless or until machine learning is programmed up front to achieve more than one goal. Assuming, as one reasonably may, that technology-based financial companies will be as profit-driven as traditional providers, how will they use their far deeper databases? Will it be to ensure compliance with rules the coders may not know about and do not care about because their own incentives lie elsewhere?
It’s as illegal to discriminate in a tech-based finance model as in a traditional one, but who’s to know? New York State has begun to require insurance underwriters to validate the non-discriminatory results of their underwriting models, which now include at least one claiming to provide life insurance based on nothing more than what an algorithm thinks of an applicant’s face. Are comparable standards warranted at the federal level and for all financial offerings, not just insurance? What if, for example, you learned about higher-yielding deposit options only if a tech company thought you worthy based on its data or what it made of them? Banks must accept funds from anyone who sees a sign on the branch door, hears an ad, or is told by a friend about an interest rate on offer. When rates are based on nothing more than what one looks like or where one lives according to a geocoding model, who will get the best offer or, indeed, any offer at all?
How much choice will consumers have when highly personal data are used to target, cross-sell, cross-subsidize, or otherwise enhance powerful platform-company profit engines?
Banks may not deny deposit services to anyone whose looks they don’t much like, nor can payment services be withheld or loans withdrawn on any basis other than demonstrable credit risk. However, just as platform companies have targeted employment ads only to white males or marketed short-term rentals to majority-group, non-disabled travelers, so too could certain financial products powered by platform companies be offered only to – or demanded from – targeted populations.
Are banks all that much better?
In the wake of my recent Financial Times opinion piece on tech finance and equality, numerous comments complained that banks had done so much social ill that only tech providers could ensure financial inclusion. Nothing here or elsewhere in our tech-finance work defends banks – what we aim to do is illuminate the rules under which banks operate and to which they can be held accountable.
When banks use AI, ML, or whatever comes next to innovate, these rules still apply, and regulators and the courts are there to enforce them. If the banks fail to comply or the government fails to make them do so, shame on them. At least, though, there’s a mechanism that protects vulnerable households.
These protections are at best fragile when the product comes from a tech platform, and the awesome data on which these companies thrive make the risks still greater. This is the asymmetry Senate Banking must confront – however much it is improved, a data-privacy regime applied only to regulated financial providers leaves ample room for still worse economic inequality.