Your Landlord Is a Stack: Opaque Software Quietly Reordered Rental Housing

On a wet Wednesday evening in November 2025, Portland City Council took a vote that amounted to a municipal exorcism. By a margin of eight to two, with two councillors absent, the council adopted an ordinance amending the Affordable Housing Code to prohibit a very specific kind of ghost: the algorithmic rent-setting device. The ordinance, pushed to the floor by Councillor Angelita Morillo, banned the sale, licence, and use of tools that ingest landlords' competitively sensitive rental data, mix it together in a proprietary blender, and spit out monthly pricing recommendations that somehow, as if by coincidence, tend to rise. On the same day, Oregon announced a proposed seven million dollar settlement with one of the country's largest landlords over illegal rental price fixing. The message was not subtle. The algorithm, the council had decided, was not a neutral tool. It was a participant in a conspiracy that no single human had ever needed to speak aloud.
To understand why a mid-sized American city felt compelled to legislate against a piece of software as if it were a cartel, you have to reckon with an odd fact about the last decade of rental housing in North America: the landlord, increasingly, is not a person. Not in any meaningful sense. It is a stack. A portfolio held by a private equity fund, operated by a property management company, advised by a revenue management platform, screened by a background check vendor, underwritten by an income verification API, and enforced, when necessary, by an automated eviction pipeline. Somewhere in that stack there is usually a human being who signs forms. But the decisions, the ones that matter to a tenant, the rent you pay, the flat you get, the day you are handed a notice to quit, have migrated upstream into systems whose logic is proprietary, whose operators have every incentive not to explain themselves, and whose errors compound at scale.
The Portland ordinance was among the first municipal laws in North America to say, in plain terms, that this arrangement is not acceptable. It will not be the last. In March 2026, the New Jersey Senate Community and Urban Affairs Committee reported Bill S451 with amendments, and a companion bill, A3497, advanced in the Assembly. Sponsored by Senators Brian Stack and Teresa Ruiz, the New Jersey legislation takes a sharper rhetorical line than Portland's: it frames the use of profit-maximising rental algorithms as a potential violation of state antitrust law, a reclassification of software as conspiracy. Governor Mikie Sherrill has signalled that she will sign the measure if it reaches her desk, describing the practice in terms that would have sounded paranoid five years ago: “for-profit surveillance by Big Tech.” And in February 2026, OpenMedia, the Canadian digital rights organisation, published an investigation titled Watch This Space: The Rise of AI Landlords in Canada, a dispatch from a country quietly outsourcing one of its most important social relationships to a handful of opaque vendors, and discovering, too late, how few tools exist to hold them to account.
What is emerging, in other words, is a regulatory front. It is jagged, uncoordinated, and full of holes. But it is real. And it is belated.
The Hub, the Spokes, and the Polite Fiction
The antitrust case against algorithmic rent-setting software rests on a theory as old as cartels themselves. In the classical “hub-and-spoke” conspiracy, a central coordinator channels information among competitors in a way that allows them to align their behaviour without ever meeting in a smoke-filled room. The hub never explicitly tells the spokes to raise prices. The spokes never explicitly agree to. But information flows, prices move, and the market ends up acting as if there had been an agreement, because functionally there was one.
In 2024, the US Department of Justice filed a civil antitrust lawsuit alleging the hub could be software. The defendant was RealPage, a Texas-based property technology company whose YieldStar and AI Revenue Management products had come to dominate the multifamily rental market. According to the DOJ's complaint, RealPage's software ingested non-public, competitively sensitive pricing and occupancy data from participating landlords, ran it through a proprietary algorithm, and returned real-time rent recommendations. Each landlord, acting alone, could plausibly claim to be simply accepting third-party advice. Collectively, they were coordinating rents across thousands of competing properties. The software was doing what a cartel meeting used to do, only faster, more granularly, and with a user interface.
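It takes remarkably little code to build a hub. The sketch below is a deliberate toy; the formulas, constants, and names are my own illustrative assumptions, not RealPage's actual model. But it captures the structural difference between a landlord pricing alone and a spoke reading off a pooled dashboard.

```python
# A deliberately simplified sketch of the hub-and-spoke pricing dynamic the
# DOJ complaint describes. Every formula and constant is an invented
# illustration, not RealPage's actual model.

def independent_price(own_rent: float, own_occupancy: float) -> float:
    """A landlord pricing alone reacts only to their own vacancy signal."""
    # Empty units push the price down; a nearly full building nudges it up.
    return own_rent * (0.97 if own_occupancy < 0.90 else 1.02)

def hub_price(own_rent: float, competitor_rents: list[float]) -> float:
    """The hub pools competitors' non-public rents and steers every spoke
    toward (slightly above) the pooled average, so nobody undercuts."""
    pooled = sum(competitor_rents) / len(competitor_rents)
    return max(own_rent, pooled) * 1.01

market = [1900.0, 2000.0, 2100.0]
print([round(hub_price(r, market)) for r in market])  # [2020, 2020, 2121]
```

No spoke ever talks to another. Each simply accepts the recommendation, and the whole market ratchets upward together.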
RealPage settled with the DOJ on 24 November 2025, agreeing to stop offering software that uses non-public competitively sensitive data shared among landlords. The final judgement installs an independent monitor for three years. A parallel class action in Tennessee produced preliminary settlements with twenty-six defendants in October 2025. In New Jersey, the state attorney general sued RealPage and ten of the state's largest landlords, alleging revenue management products had been used to inflate rents and eliminate competitive pricing.
Legal academics have spent two years picking apart a question the RealPage case raises but does not fully resolve: at what point does using the same pricing software as your competitors become collusion? The Ninth Circuit, in an August 2025 decision, held that mere parallel use of a common algorithm is not enough; there must be some additional factor, a "plus factor" in antitrust parlance, that suggests coordination beyond independent business judgement. Critics argue this standard is already obsolete. The whole point of modern revenue management software is that it dissolves the distinction between independent judgement and coordinated behaviour. When your "judgement" is literally reading off a dashboard populated with your competitors' data, there is nothing independent about it. You are just a spoke on a very well-lit wheel.
This is why the Portland ordinance and the New Jersey legislation matter beyond their jurisdictions. They are experiments in skipping past the antitrust doctrinal debate entirely. Rather than arguing, case by case, about whether a particular use of a particular algorithm constitutes a Sherman Act violation, they simply prohibit the class of tools that makes the question interesting. Jersey City became the first New Jersey municipality to ban AI-driven rent-setting software in May 2025, and Hoboken passed a similar ordinance in July 2025. San Francisco, Berkeley, Philadelphia, and Minneapolis have followed with ordinances of their own. New York State enacted a statewide ban in 2025. The patchwork is real. So is the backlash: RealPage has sued Berkeley over its ordinance, and a bill in Congress would block local bans altogether, sponsored by legislators whose campaign contributions from the property technology sector are, to put it charitably, not coincidental.
The polite fiction that algorithmic pricing is a neutral productivity tool, rather than a coordinated market behaviour, is dissolving.
Screening, or the Quiet Part Out Loud
If algorithmic price-setting is the loud scandal, algorithmic tenant screening is the quieter one, and arguably the uglier. Price fixing harms everyone who pays rent. Screening algorithms harm, specifically and disproportionately, the people with the least leverage: low-income applicants, people of colour, housing voucher holders, recent immigrants, people with disability histories, and, as we will see, retired people whose wealth happens to live in the wrong kind of account.
The paradigmatic case is Louis v. SafeRent Solutions, which ended in a 2.275 million dollar settlement in late 2024. Mary Louis, a Black woman in the Boston area, applied for a flat. She had a housing voucher. She had a cosigner with a strong credit history. She had a recommendation from a landlord who had rented to her for seventeen years. SafeRent's scoring algorithm denied her anyway. The score was a number. The number was the decision. When she asked why, nobody could tell her, because nobody knew. The algorithm's weights were proprietary. The landlord had outsourced the judgement. The scoring company had automated it. And Louis, who was very much a person with a life and a rental history and a plan, had been translated into a probability distribution that did not flatter her.
The SafeRent settlement established, in a negotiated form, that an algorithmic screening system producing disparate impact against members of a protected class creates liability under Section 3604 of the Fair Housing Act. Even if each of the algorithm's individual inputs is facially neutral, the output is what matters. This aligns with guidance issued by the United States Department of Housing and Urban Development in May 2023, which stated that housing providers and tenant screening companies that use algorithms are not absolved from liability when their practices disproportionately deny people of colour access to housing.
The problem is that guidance is not law, settlements bind only their parties, and the broader regulatory environment for tenant screening remains, in a word, rickety. In theory, the Fair Credit Reporting Act requires that when a tenant is denied housing based on a consumer report, the landlord must provide an adverse action notice naming the reporting company and explaining the right to dispute. In practice, a survey cited by the Center for Democracy and Technology found that only three per cent of renters report knowing the name of the screening company involved in their denial. Three per cent. Which means ninety-seven per cent of denied applicants have been handed a decision that was probably, under existing law, required to come with a full disclosure, and simply did not.
Reddit, predictably, is where the texture of this failure lives. On forums like r/Tenant, r/Renters, and country-specific subs in Canada and the UK, the rejection stories share a grammar. An applicant submits through a national portal. A form email arrives a day or two later, stating that the application has been declined, that the decision is final, and that the landlord cannot share additional information. No name of the scoring company. No numerical score. No reasoning. Sometimes the rejection arrives within minutes, which is the giveaway: no human has looked. The tenant, if persistent, writes to the property manager. The property manager, if they respond, redirects to the screening vendor. The vendor explains that their system is a “decision support” tool and that the landlord made the decision, which the landlord did, in the sense that they clicked accept on the output. The buck is passed, circulated, and finally mislaid.
This is not an anecdote. It is a structural feature. When every participant can point to another as the “real” decider, nobody is responsible. The algorithm is treated as advisory. The landlord is treated as the decision-maker. The vendor is treated as a mere processor. And the tenant, the one actor whose life is materially altered, is treated as an input.
The Retired Couple Problem
Nowhere is the brittleness of algorithmic screening more visible than in a class of applicant that ought to be trivially easy to approve: the asset-rich, income-light retiree. Consider a retired couple in their seventies. They have sold their family home. They have eight hundred thousand pounds in a combination of index funds, a self-invested personal pension, and ISA savings. They receive modest state pension income plus a small private pension. They want to rent a flat, possibly for the rest of their lives. Any sensible human landlord would recognise the applicants as close to ideal. They are not going to miss a payment. Their wealth is documented and liquid in the relevant sense.
An automated income verification system does not see any of this. It sees a bank account with low deposits relative to expected rent. It sees no salary. It sees a 1099-R in the United States, or a pension statement in the United Kingdom, that does not parse as “employment income” in the system's model. It sees “insufficient income to rent ratio,” flags the applicant, and issues a decline. The couple, baffled, may not even learn the reason. If they do, they face the near-impossible task of explaining, through a web form or a call centre, that their liquidity lives in instruments the software does not understand.
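The gap between the automated judgement and the human one is easy to state in code. Here is a minimal sketch of the failure mode, assuming a common 2.5-times income-to-rent rule; the rule, the numbers, and the thirty-six-month horizon are illustrative assumptions, not any vendor's documented logic.

```python
# A minimal sketch of the retired-couple failure mode. The 2.5x rule and
# 36-month horizon are illustrative assumptions, not any vendor's logic.

def automated_income_check(monthly_income: float, monthly_rent: float) -> bool:
    """The common automated rule: gross monthly income of at least 2.5x rent."""
    return monthly_income >= 2.5 * monthly_rent

def asset_aware_check(monthly_income: float, liquid_assets: float,
                      monthly_rent: float, horizon_months: int = 36) -> bool:
    """What a human landlord does implicitly: treat documented liquid
    savings as income spread over a multi-year horizon."""
    return (monthly_income + liquid_assets / horizon_months) >= 2.5 * monthly_rent

# The couple from above: modest pensions, 800,000 in liquid savings,
# applying for a flat at 1,500 a month.
print(automated_income_check(1_800, 1_500))       # False -> automatic decline
print(asset_aware_check(1_800, 800_000, 1_500))   # True  -> obviously fine
```

One extra term in one inequality. That is the entire distance between a decline and an approval for this couple, and it is a term the deployed systems do not include.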
This failure mode is not hypothetical. OpenMedia's February 2026 investigation documented multiple instances of retirees and self-employed applicants being denied by automated screening systems that could not ingest asset-based or non-standard income. Applicants on irregular income, seasonal work, gig platforms, disability benefits, family support, or investment distributions find themselves recursively rejected by systems trained on the assumption that income is a W-2 or a PAYE payslip. The effect is to systematically disadvantage anyone whose financial life does not conform to the assumptions of a mid-twentieth-century labour market.
And the effect compounds. Because these screening systems increasingly operate on overlapping datasets, a rejection by one often produces a cascade of similar rejections elsewhere. A tenant declined by a large property management company in a major metropolitan area may find themselves effectively blacklisted across the private rented sector in their region, without ever having done anything wrong, simply because the algorithm could not parse their life. There is no central registry of these decisions. There is no equivalent of a credit bureau dispute process that reliably works for algorithmic tenant scores. There is, in many cases, not even a name to sue.
What OpenMedia Found
The February 2026 OpenMedia report captures what happens when this infrastructure is built out in a jurisdiction with weaker privacy enforcement than the European Union and more fragmented housing law than the United States. OpenMedia's researchers documented the rapid spread of AI landlord tools across the Canadian rental market, naming vendors including Certn and SingleKey. Their marketing materials promise landlords the ability to scan up to seven years of a prospective tenant's social media activity, search more than a hundred databases of personal information including eviction and criminal records, flag "risky" online behaviour, and generate automated risk scores.
The scope of surveillance on offer, even leaving aside questions of accuracy, is extraordinary. A 2018 report by the British Columbia Information and Privacy Commissioner, cited by OpenMedia, found that ten of thirteen landlords studied were already systematically over-collecting sensitive personal information in violation of the province's Personal Information Protection Act. That was before the current generation of AI screening tools existed. Canadian tenants increasingly apply through portals that ingest years of financial history, employment records, social media handles, criminal background, and in some cases access to open banking feeds that reveal every transaction for months.
The OpenMedia report also documents an aspect that rarely makes headlines: automated eviction initiation. Several Canadian AI landlord platforms now offer modules that flag tenants for non-payment risk and, in some configurations, automatically generate and file notice-to-quit paperwork once a threshold is crossed. The property manager is presented with an eviction already drafted. All they need to do is sign off. The frictionlessness is the point. And the frictionlessness is what makes it dangerous.
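The pattern is simple enough to sketch. Everything below, from the class to the fourteen-day trigger, is hypothetical; it illustrates the shape of the module, not any vendor's code.

```python
# A hypothetical sketch of the "eviction module" pattern the report
# describes. The class, fields, and 14-day trigger are invented.

from dataclasses import dataclass

@dataclass
class TenantLedger:
    tenant_id: str
    days_overdue: int
    balance_owed: float

NOTICE_THRESHOLD_DAYS = 14  # assumed configuration value

def maybe_draft_notice(ledger: TenantLedger) -> str | None:
    """Cross the threshold and the paperwork writes itself. Note what is
    absent: any check for a bank error, a disputed charge, a hospital stay."""
    if ledger.days_overdue >= NOTICE_THRESHOLD_DAYS:
        return (f"NOTICE TO QUIT - tenant {ledger.tenant_id}: "
                f"{ledger.balance_owed:.2f} owed, {ledger.days_overdue} days "
                f"overdue. Awaiting signature.")
    return None

print(maybe_draft_notice(TenantLedger("unit-4B", 15, 1500.00)))
```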
In the United Kingdom, similar tools are in circulation, though deployment is more fragmented. The UK's regulatory posture is, in principle, stronger: the UK GDPR, which preserves Article 22, gives tenants a right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects. The Information Commissioner's Office has issued guidance that housing decisions fall squarely within Article 22's scope. In practice, enforcement has been minimal, and the “solely” in “solely automated” has become a loophole: a human clicking an approval button is often treated as meaningful human involvement, even when the human does no independent review. GDPR's promise, in the rental context, has been largely notional.
The Frameworks That Would Need to Exist
If we were to design, from scratch, a legal and ethical framework adequate to algorithmically mediated housing, it would need several overlapping layers. Not one silver bullet. Not one statute. A stack.
The first is algorithmic transparency and the right to an explanation. The model already exists, in imperfect form, in Article 22 of the GDPR and the associated Articles 13 to 15, which grant data subjects a right to meaningful information about the logic involved in automated decisions. In the housing context, any decision to reject an application, to set or raise rent, to initiate eviction, or to classify a tenant as high-risk must be accompanied by an individualised explanation that a reasonable applicant can act on. Not a generic disclaimer. Not “your score was below our threshold.” The features that drove the decision, the weights attached to them, and the means to dispute. California's AB 2930, Colorado's SB 205, and the EU AI Act's high-risk system provisions gesture in this direction. None yet fully deliver.
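For simple scoring models, the explanation this layer demands is not technically difficult to produce. Here is a sketch of what an actionable disclosure could look like, with invented features, weights, and threshold:

```python
# A sketch of an individualised explanation for a simple linear screening
# score. Features, weights, and threshold are invented; production models
# are more complex, which strengthens the case for disclosure, not against.

WEIGHTS = {"credit_score": 0.40, "income_to_rent": 0.35,
           "prior_eviction_judgement": -0.50, "years_rental_history": 0.15}
THRESHOLD = 0.60

def explain_decision(features: dict[str, float]) -> dict:
    contributions = {k: WEIGHTS[k] * v for k, v in features.items()}
    score = sum(contributions.values())
    return {
        "decision": "approve" if score >= THRESHOLD else "decline",
        "score": round(score, 3),
        "threshold": THRESHOLD,
        # Ranked drivers: the part an applicant can actually act on or dispute.
        "factors": sorted(contributions.items(), key=lambda kv: kv[1]),
    }

print(explain_decision({"credit_score": 0.70, "income_to_rent": 0.30,
                        "prior_eviction_judgement": 0.0,
                        "years_rental_history": 0.90}))
```

"Your score was below our threshold" tells an applicant nothing; a ranked factor list tells them what to dispute.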
The second is a modernised adverse action regime. The FCRA, drafted in 1970, was not built for machine learning. Its core insight, that when a consumer is harmed by a report they must be told who provided it and how to dispute it, remains sound. What it needs is an update that treats algorithmic scoring outputs as consumer reports in their own right, regardless of whether the vendor is technically a consumer reporting agency. It needs a private right of action strong enough that violations are actually litigated. And it needs clear rules on data provenance: tenants must be able to know where every input came from, and correct ones that are wrong.
The third is antitrust reform for algorithmic coordination. The Portland ordinance and the New Jersey legislation are, in effect, saying that existing Sherman Act and state antitrust doctrine is not agile enough to handle pricing algorithms that coordinate markets without explicit agreement. They may be right. A federal statute clarifying that the provision or use of pricing software that ingests competitors' non-public data and returns pricing recommendations is per se unlawful, full stop, would resolve most of the doctrinal uncertainty the Ninth Circuit has struggled with.
The fourth is fair housing law modernisation. The Fair Housing Act's disparate impact doctrine, as reaffirmed by HUD's 2023 guidance and the SafeRent settlement, already in principle applies to algorithmic screening. What is missing is a pre-deployment obligation: a requirement that any system used to screen tenants be audited for disparate impact before deployment, and periodically thereafter, with results disclosed to regulators and, in summary form, to the public. The EU AI Act has adopted something like this for high-risk systems. US federal law has not.
The fifth is data minimisation and purpose limitation. Why, precisely, does a landlord need seven years of your social media history to decide whether to rent you a flat? They do not. The reason this data is collected is that it is available and that vendors have built products around ingesting it. A defensible regime would restrict the information that can be lawfully used in a tenancy decision to a short, well-justified list: identity, credit history for a defined period, verifiable income or assets, prior eviction judgements. Everything else, social media, location history, network-of-associates data, should be off-limits.
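In software, purpose limitation is an allowlist at intake, not a policy document. A sketch, with field names of my own invention mirroring the short list above:

```python
# A sketch of data minimisation enforced at intake: an explicit allowlist
# of inputs a tenancy decision may lawfully consider. Field names are
# invented to mirror the short list proposed above.

ALLOWED_FIELDS = {
    "identity",
    "credit_history_24m",        # credit history for a defined period
    "verified_income",
    "verified_assets",           # verifiable income *or* assets
    "prior_eviction_judgements",
}

def minimise(raw_application: dict) -> dict:
    """Everything off the allowlist is discarded before scoring, never stored."""
    discarded = sorted(set(raw_application) - ALLOWED_FIELDS)
    if discarded:
        print(f"rejected at intake: {discarded}")  # audit trail only
    return {k: v for k, v in raw_application.items() if k in ALLOWED_FIELDS}

clean = minimise({
    "identity": "applicant-123",
    "verified_assets": 800_000,
    "social_media_handles": ["..."],   # off-limits
    "location_history": "...",         # off-limits
})
```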
The sixth is a human-in-the-loop mandate with teeth. The current form of “human review” is often performative: a property manager glances at a dashboard and clicks accept. A meaningful mandate would require that any adverse decision, a denial, a rent increase above a threshold, an eviction filing, involve substantive human consideration of the specific circumstances, documented in writing, by a person with authority to override the algorithm. Anything less is GDPR Article 22's “solely automated” dressed in a lab coat.
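What teeth might mean in practice: the system refuses to finalise an adverse decision without a named reviewer and a substantive written rationale. A sketch, with an arbitrary length floor standing in for "substantive":

```python
# A sketch of a human-in-the-loop mandate with teeth. The 50-character
# floor is an arbitrary stand-in for "substantive written consideration".

def finalise_adverse_decision(algorithm_recommendation: str,
                              reviewer: str, rationale: str) -> str:
    if not reviewer.strip():
        raise ValueError("an adverse decision requires a named human reviewer")
    if len(rationale.strip()) < 50:
        # Clicking "agree with system" does not count as consideration.
        raise ValueError("rationale must document the specific circumstances")
    return (f"{algorithm_recommendation} | reviewed and confirmed by "
            f"{reviewer}: {rationale}")
```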
The seventh is a private right of action. Without one, regulators are the only enforcement mechanism, and regulators are outnumbered, underfunded, and subject to political winds. Tenants harmed by algorithmic decisions must be able to sue, individually or collectively, with the prospect of statutory damages that matter to the vendors' bottom line.
None of this is impossibly difficult to design. All of it is politically difficult to pass.
Why It Has Been So Slow
The lag between the emergence of algorithmic rental infrastructure and the legal response is not accidental. It is the product of several overlapping forces.
The first is lobbying. The real estate technology sector is well capitalised and well connected. RealPage alone has spent substantial sums on lobbying in Washington and in state capitals, and the National Multifamily Housing Council has been an aggressive opponent of algorithmic pricing bans. The Congressional bill that would pre-empt local bans did not arrive by accident. It arrived because the sector organised, wrote briefing papers, cultivated relationships, and showed up.
The second is jurisdictional fragmentation. Housing law in North America is, fundamentally, local. Tenancy rules vary by state, province, county, and city. Fair housing enforcement is divided between federal, state, and municipal bodies with overlapping mandates. Data protection in the United States is a patchwork of sectoral statutes and state laws rather than a comprehensive regime. In Canada, privacy law is split between federal and provincial statutes with varying scope. There is no single venue in which the problem can be addressed.
The third is the “it's just software” framing. Property technology vendors have been remarkably successful at presenting their products as neutral productivity tools, no different in principle from spreadsheets or email. This framing has allowed them to slip past regulatory scrutiny that would have attached to an equivalent human operation. A trade association that circulated pricing data among competing landlords would have been an antitrust target within months. A piece of software that did the same thing operated for over a decade before the DOJ took action.
The fourth is the pace gap between technology and law. Legislatures move slowly. Machine learning systems iterate quickly. By the time a committee has held hearings, drafted a bill, negotiated amendments, and passed a statute, the technology has moved on. In housing, where the human stakes are so direct, the gap is particularly painful.
The fifth, and perhaps most important, is that housing is politically fraught. Every proposed tenant protection collides with a coalition of property owners, developers, landlord associations, and investors who will argue, often successfully, that any new regulation will reduce housing supply and hurt the very people it aims to protect. This argument has merit in some contexts and is made in bad faith in others. The effect, either way, is to make housing reform slow, incremental, and vulnerable to reversal. Banning rent-setting software is presented by its opponents as a form of price control. That the alternative is an effectively unregulated market-wide coordination mechanism does not feature prominently in the industry's press releases.
Housing as a Human Right, and the Asymmetry
Behind the specific technical debates about algorithms and adverse action notices lies a broader ethical question that most regulatory frameworks dance around. If housing is a human right, or at least a precondition for the exercise of other rights, then who gets to live where is not a purely private matter between landlord and tenant. It is a matter in which the state has a legitimate and, arguably, obligatory interest. This framing is common in European human rights jurisprudence and in UN declarations. It is less established in North American constitutional doctrine, where housing is typically treated as a commodity subject to market allocation, moderated by discrimination law and modest subsidies.
The algorithmic turn makes the question urgent. In a market of individual landlords making individual decisions, even when those decisions are discriminatory or arbitrary, the harm is distributed and contestable. A tenant denied by one landlord can try another. The friction of the market is the friction of a thousand independent actors.
In a market mediated by a handful of algorithmic vendors, all of whom share similar data, similar models, and similar blind spots, the friction collapses. Being rejected by one system is being rejected by most. Being flagged as high-risk by one score is being flagged similarly across portfolios. The asymmetry between landlord and tenant, which was always real, becomes qualitatively different when the landlord's side of the relationship is automated, networked, and effectively infinite, while the tenant's side remains one anxious person with a phone and a rejection email.
This is the ethical heart of the matter. The algorithm does not just mediate; it structurally biases the relationship in favour of the side that deployed it. It is not a neutral tool. It is a party. And when it is a party that cannot be questioned, sued effectively, or compelled to explain, the relationship ceases to have the features we normally associate with fair dealing between legal persons. It becomes an administrative system with the power of a landlord and the accountability of a weather pattern.
What the Portland Vote Actually Represents
Which brings us back to that wet Wednesday in November. The Portland ordinance, in the grand sweep of housing policy, is small. It will affect a single city. It carries civil penalties of up to a thousand dollars per violation, which is, by the standards of the entities it regulates, loose change. Its practical impact on rents in Portland will be debated for years.
And yet. The ordinance represents something the regulatory conversation has been missing: a willingness to treat algorithmic infrastructure as a political object rather than a technical inevitability. Portland's council did not hold a symposium on whether large language models will bring about the singularity. They did not convene a blue-ribbon commission to study the metaphysics of automated decision-making. They looked at a specific piece of software, understood what it did, concluded that what it did was unlawful if done by humans, and made it unlawful when done by software. It was, in a sense, a deeply old-fashioned move. It treated the technology as conduct, and conduct as regulable.
New Jersey's March 2026 legislation extends the logic. OpenMedia's February investigation documents the scale of what has been built. HUD's guidance, the SafeRent settlement, the RealPage consent decree, and the various state and municipal ordinances form, together, the rough outline of a response. Not a coherent framework, not yet. But the scaffolding of one.
The question is whether the federal government, or a coalition of states, can assemble the pieces into something that actually protects tenants. The reasons to persist are the stories that surface daily in tenant forums, in courtrooms, and in front of the flats retirees have been refused because a computer could not count their pension.
A decade from now, we will either look back at 2025 and 2026 as the moment the law began to catch up with the infrastructure of rental housing, or as the moment a handful of cities shouted into a wind that kept blowing. Portland voted eight to two. New Jersey's Senate committee voted to advance. OpenMedia published. The DOJ sued and settled. The work, as ever, is in the follow-through. The algorithm is not going to regulate itself. It does not notice whether the person it just declined is Mary Louis, or a retired couple, or someone whose only crime was having the wrong kind of income in the wrong kind of account.
That work, still, belongs to us.
References and Sources
- City of Portland. “Amend Affordable Housing Code to add prohibition of anti-competitive rental practices including the sale and use of algorithmic devices.” November 2025. portland.gov/council/documents/ordinance/algorithmic-rental-pricing
- Portland Mercury. “Portland City Council Votes to Adopt AI Rental Price-Fixing Software Ban.” 19 November 2025.
- US Department of Justice. “Justice Department Requires RealPage to End the Sharing of Competitively Sensitive Information.” 24 November 2025.
- US Department of Justice. “Justice Department Sues RealPage for Algorithmic Pricing Scheme That Harms Millions of American Renters.” 2024.
- Reed Smith. “Algorithmic pricing under pressure: DOJ's RealPage settlement changes the rules for rental markets.” 2025.
- New Jersey State Policy Lab, Rutgers University. “Senate Committee Advances Ban on Rent-Setting Algorithms.” 9 March 2026.
- Multifamily Dive. “New Jersey mulls algorithmic rent pricing ban.” 2026.
- City of Hoboken. “City of Hoboken to introduce ordinance prohibiting algorithmic rent-fixing.” 2025.
- Insider NJ. “Jersey City Council Advances Ordinances that Ban AI-Powered Rent-Fixing Algorithms.” 2025.
- Arnold & Porter. “Algorithmic Pricing Bans Go Coast to Coast.” October 2025.
- OpenMedia. “Watch This Space: The Rise of AI Landlords in Canada.” February 2026.
- American Bar Association, Human Rights Magazine. “Ghosts in the Machine: How Past and Present Biases Haunt Algorithmic Tenant Screening Systems.” June 2024.
- Center for Democracy and Technology. “Tenant Screening Algorithms Enable Racial and Disability Discrimination at Scale.”
- Consumer Financial Protection Bureau. “Consumer Snapshot: Tenant Background Checks.” November 2022.
- The Leadership Conference on Civil and Human Rights. “AI + Tenant Screening.”
- TechEquity Collaborative. “Traditional vs. algorithmic tenant screening.” 10 July 2024.
- Georgetown Law Poverty Journal. “The Discriminatory Impacts of AI-Powered Tenant Screening Programs.”
- American University Business Law Review. “Screened Out: How Faulty Algorithms Are Shutting Doors on Fair Housing.” September 2024.
- California Law Review. “A Home for Digital Equity: Algorithmic Redlining and Property Technology.”
- Information Commissioner's Office (UK). “What is the impact of Article 22 of the UK GDPR on fairness?”
- General Data Protection Regulation. “Article 22: Automated individual decision-making, including profiling.” gdpr-info.eu/art-22-gdpr
- Gothamist. “Hochul signs bill banning NY landlords from using algorithm software to set rents.” 2025.
- Freshfields. “Settling Defendants to Pay over $141 million to Settle RealPage Price-Fixing Class Action Claims in Tennessee.” October 2025.
- Arnold & Porter. “Ninth Circuit Clarifies Antitrust Implications of Algorithmic Pricing.” August 2025.
- US Department of Housing and Urban Development and US Department of Justice. Joint guidance on algorithm-based tenant screening. May 2023.

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk
Listen to the free weekly SmarterArticles Podcast