Seeing Everything, Knowing Nothing: How Privacy Tech Reads Trends

Every day, billions of people tap, swipe, and type their lives into digital platforms. Their messages reveal emerging slang before dictionaries catch up. Their search patterns signal health crises before hospitals fill up. Their collective behaviours trace economic shifts before economists can publish papers. This treasure trove of human insight sits tantalisingly close to platform operators, yet increasingly out of legal reach. The question haunting every major technology company in 2026 is deceptively simple: how do you extract meaning from user content without actually seeing it?
The answer lies in a fascinating collection of mathematical techniques collectively known as privacy-enhancing technologies, or PETs. These are not merely compliance tools designed to keep regulators happy. They represent a fundamental reimagining of what data analysis can look like in an age where privacy has become both a legal requirement and a competitive differentiator. The global privacy-enhancing technologies market, valued at approximately USD 3.17 billion in 2024, is projected to explode to USD 28.4 billion by 2034, growing at a compound annual growth rate of 24.5 percent. That growth trajectory tells a story about where the technology industry believes the future lies.
This article examines the major privacy-enhancing technologies available for conducting trend analysis on user content, explores the operational and policy changes required to integrate them into analytics pipelines, and addresses the critical question of how to validate privacy guarantees in production environments.
The Privacy Paradox at Scale
Modern platforms face an uncomfortable tension that grows more acute with each passing year. On one side sits the undeniable value of understanding user behaviour at scale. Knowing which topics trend, which concerns emerge, and which patterns repeat allows platforms to improve services, detect abuse, and generate the insights that advertisers desperately want. On the other side sits an increasingly formidable wall of privacy regulations, user expectations, and genuine ethical concerns about surveillance capitalism.
The regulatory landscape has fundamentally shifted in ways that would have seemed unthinkable a decade ago. The General Data Protection Regulation (GDPR) in the European Union can impose fines of up to four percent of global annual revenue or twenty million euros, whichever is higher. Since 2018, GDPR enforcement has resulted in 2,248 fines totalling almost 6.6 billion euros, with the largest single fine being Meta's 1.2 billion euro penalty in May 2023 for transferring European user data to the United States without adequate legal basis. The California Consumer Privacy Act and its successor, the California Privacy Rights Act, apply to for-profit businesses with annual gross revenue exceeding USD 26.625 million, or those handling personal information of 100,000 or more consumers. By 2025, over twenty US states have enacted comprehensive privacy laws with requirements similar to GDPR and CCPA.
The consequences of non-compliance extend far beyond financial penalties. Companies face reputational damage that can erode customer trust for years. The 2024 IBM Cost of a Data Breach Report reveals that the global average data breach cost has reached USD 4.88 million, representing a ten percent increase from the previous year. This figure encompasses not just regulatory fines but also customer churn, remediation costs, and lost business opportunities. Healthcare organisations face even steeper costs, with breaches in that sector averaging USD 9.77 million, the highest of any industry for the fourteenth consecutive year.
Traditional approaches to this problem treated privacy as an afterthought. Organisations would collect everything, store everything, analyse everything, and then attempt to bolt on privacy protections through access controls and anonymisation. This approach has proven inadequate. Researchers have repeatedly demonstrated that supposedly anonymised datasets can be re-identified by combining them with external information. Latanya Sweeney's landmark 2000 study showed that 87 percent of Americans could be uniquely identified using just their date of birth, gender, and ZIP code. The traditional model of collect first, protect later is failing, and the industry knows it.
Differential Privacy Comes of Age
In 2006, Cynthia Dwork, working alongside Frank McSherry, Kobbi Nissim, and Adam Smith, published a paper that would fundamentally reshape how we think about data privacy. Their work, titled “Calibrating Noise to Sensitivity in Private Data Analysis,” introduced the mathematical framework of differential privacy. Rather than trying to hide individual records through anonymisation, differential privacy works by adding carefully calibrated statistical noise to query results. The noise is calibrated so that the output reveals almost nothing about whether any particular individual's data was included in the dataset, while still allowing accurate aggregate statistics to emerge from sufficiently large datasets.
The beauty of differential privacy lies in its mathematical rigour. The framework introduces two key parameters: epsilon and delta. Epsilon represents the “privacy budget” and quantifies the maximum amount of information that can be learned about any individual from the output of a privacy-preserving algorithm. A smaller epsilon provides stronger privacy guarantees but typically results in less accurate outputs. Delta represents the probability that the privacy guarantee might fail. Together, these parameters allow organisations to make precise, quantifiable claims about the privacy protections they offer.
In practice, epsilon values often range from 0.1 to 1 for strong privacy guarantees, though specific applications may use higher values when utility requirements demand it. The cumulative nature of privacy budgets means that each query against a dataset consumes some of the available privacy budget. Eventually, repeated queries exhaust the budget, requiring either a new dataset or acceptance of diminished privacy guarantees. This constraint forces organisations to think carefully about which analyses truly matter.
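To make these abstractions concrete, the sketch below applies the classic Laplace mechanism to a counting query and tracks a naive sequential-composition privacy budget. The class, helper names, and numbers are illustrative assumptions rather than any particular library's API.

```python
# A minimal sketch of the Laplace mechanism for a counting query, with a
# naive privacy-budget tracker based on sequential composition.
import numpy as np

class DPCounter:
    def __init__(self, total_epsilon: float):
        self.remaining = total_epsilon   # total privacy budget available

    def noisy_count(self, values, predicate, epsilon: float) -> float:
        """Return a differentially private count of items matching `predicate`."""
        if epsilon > self.remaining:
            raise RuntimeError("privacy budget exhausted")
        self.remaining -= epsilon
        true_count = sum(1 for v in values if predicate(v))
        # A counting query has sensitivity 1: adding or removing one person's
        # record changes the count by at most 1, so the noise scale is 1/epsilon.
        return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Example: how many posts mentioned a trending topic today?
posts = ["#eclipse", "weather", "#eclipse", "#eclipse", "lunch"]
counter = DPCounter(total_epsilon=1.0)
print(counter.noisy_count(posts, lambda p: p == "#eclipse", epsilon=0.5))
```

Each call spends part of the budget; once it is exhausted, further queries are refused, which is exactly the discipline the composition property demands.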
Major technology companies have embraced differential privacy with varying degrees of enthusiasm and transparency. Apple has been a pioneer in implementing local differential privacy across iOS and macOS. The company uses the technique for QuickType suggestions (with an epsilon of 16) and emoji suggestions (with an epsilon of 4). Apple also uses differential privacy to learn iconic scenes and improve key photo selection for the Memories and Places iOS apps.
Google's differential privacy implementations span Chrome, YouTube, and Maps, analysing user activity to improve experiences without linking noisy data with identifying information. The company has made its differential privacy library open source and partnered with Tumult Labs to bring differential privacy to BigQuery. This technology powers the Ads Data Hub and enabled the COVID-19 Community Mobility Reports that provided valuable pandemic insights while protecting individual privacy. Google's early implementations date back to 2014 with RAPPOR for collecting statistics about unwanted software.
Microsoft applies differential privacy in its Assistive AI with an epsilon of 4. This epsilon value has become a policy standard across Microsoft use cases for differentially private machine learning, applying to each user's data over a period of six months. Microsoft also uses differential privacy for collecting telemetry data from Windows devices.
The most ambitious application of differential privacy came from the United States Census Bureau for the 2020 Census. This marked the first time any federal government statistical agency applied differential privacy at such a scale. The Census Bureau established accuracy targets ensuring that the largest racial or ethnic group in any geographic entity with a population of 500 or more persons would be accurate within five percentage points of their enumerated value at least 95 percent of the time. Unlike previous disclosure avoidance methods such as data swapping, the differential privacy approach allows the Census Bureau to be fully transparent about its methodology, with programming code and settings publicly available.
Federated Learning and the Data That Never Leaves
If differential privacy protects data by adding noise, federated learning protects data by ensuring it never travels in the first place. This architectural approach to privacy trains machine learning models directly on user devices at the network's edge, eliminating the need to upload raw data to the cloud entirely. Users train local models on their own data and contribute only the resulting model updates, called gradients, to a central server. These updates are aggregated to create a global model that benefits from everyone's data without anyone's data ever leaving their device.
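A toy simulation in plain NumPy illustrates the flow: each simulated device computes an update against its private data shard, and the server only ever receives and averages those updates. The linear-regression task, data, and function names below are assumptions for illustration; purpose-built frameworks, discussed below, orchestrate this at production scale.

```python
# A minimal federated-averaging sketch: raw data stays on each "device",
# and only model updates (weight deltas) are sent to the server.
import numpy as np

def local_update(weights, X, y, lr=0.1, steps=20):
    """Runs on-device: a few gradient steps on private data, never uploaded."""
    w = weights.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)    # least-squares gradient
        w -= lr * grad
    return w - weights                        # only the delta leaves the device

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Three devices, each holding its own private data shard.
shards = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    shards.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))

global_w = np.zeros(2)
for _ in range(10):                           # federated rounds
    updates = [local_update(global_w, X, y) for X, y in shards]
    global_w += np.mean(updates, axis=0)      # server averages updates only
print("learned weights:", global_w)           # approaches true_w
```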
The concept aligns naturally with data minimisation principles enshrined in regulations like GDPR. By design, federated learning structurally embodies the practice of collecting only what is necessary. Major technology companies including Google, Apple, and Meta have adopted federated learning in applications ranging from keyboard prediction (Gboard) to voice assistants (Siri) to AI assistants on social platforms.
Beyond machine learning, the same principles apply to analytics through what Google calls Federated Analytics. This approach supports basic data science needs such as counts, averages, histograms, quantiles, and other SQL-like queries, all computed locally on devices and aggregated without centralised data collection. Analysts can learn aggregate model metrics, popular trends and activities, or geospatial location heatmaps without ever seeing individual user data.
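The same pattern works for a simple federated count or histogram: each device tallies its own content and shares only the tally. The snippet below is a deliberately simplified illustration with no secure aggregation or added noise, and every name in it is an assumption.

```python
# Federated analytics in miniature: devices report topic counts, never posts.
from collections import Counter

def local_histogram(posts, topics):
    """Runs on-device: count topic mentions without exporting any post."""
    return Counter(t for p in posts for t in topics if t in p)

device_posts = [
    ["saw the #eclipse today", "lunch plans"],
    ["#eclipse photos!", "new #phone arrived"],
    ["weather is wild", "#eclipse was stunning"],
]
topics = ["#eclipse", "#phone"]

# The server only ever receives per-device histograms, which it sums.
global_counts = Counter()
for posts in device_posts:
    global_counts.update(local_histogram(posts, topics))
print(dict(global_counts))    # {'#eclipse': 3, '#phone': 1}
```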
The technical foundations have matured considerably. TensorFlow Federated is Google's open source framework designed specifically for federated learning research and applications. PyTorch has also become increasingly popular for federated learning through extensions and specialised libraries. These tools make the technology accessible to organisations beyond the largest technology companies.
An interesting collaboration emerged from the pandemic response. Apple and Google's Exposure Notification framework includes an analytics component that uses distributed differential privacy with a local epsilon of 8. This demonstrates how federated approaches can be combined with differential privacy for enhanced protection.
However, federated learning presents its own challenges. The requirements of privacy and security in federated learning are inherently conflicting. Privacy necessitates the concealment of individual client updates, while security requires some disclosure of client updates to detect anomalies like adversarial attacks. Research gaps remain in handling non-identical data distributions across devices and in defending against poisoning and inference attacks.
Homomorphic Encryption and Computing on Secrets
Homomorphic encryption represents what cryptographers sometimes call the “holy grail” of encryption: the ability to perform computations on encrypted data without ever decrypting it. The results of these encrypted computations, when decrypted, match what would have been obtained by performing the same operations on the plaintext data. This means sensitive data can be processed, analysed, and transformed while remaining encrypted throughout the entire computation pipeline.
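The additively homomorphic property is easiest to see in the Paillier cryptosystem, sketched below with deliberately tiny, insecure parameters purely for illustration; a real deployment would use a vetted library and far larger keys.

```python
# Toy Paillier cryptosystem (insecure key size, illustration only), showing
# that multiplying ciphertexts yields an encryption of the sum of plaintexts.
# Requires Python 3.9+ (math.lcm, modular inverse via pow).
import math
import secrets

def keygen(p: int = 104729, q: int = 104723):
    # p and q are small hard-coded primes for readability; real keys use
    # randomly generated primes of 1024 bits or more.
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                          # standard simplified choice of g
    mu = pow(lam % n, -1, n)           # modular inverse of lambda mod n
    return (n, g), (lam, mu, n)

def encrypt(pub, m: int) -> int:
    n, g = pub
    n_sq = n * n
    r = secrets.randbelow(n - 1) + 1   # random blinding factor
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(priv, c: int) -> int:
    lam, mu, n = priv
    n_sq = n * n
    L = (pow(c, lam, n_sq) - 1) // n   # Paillier's "L" function
    return (L * mu) % n

pub, priv = keygen()
a, b = 1500, 2700                      # e.g. two parties' private counts
c_sum = (encrypt(pub, a) * encrypt(pub, b)) % (pub[0] ** 2)
assert decrypt(priv, c_sum) == a + b   # the sum was computed while encrypted
print("encrypted sum decrypts to", decrypt(priv, c_sum))
```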
As of 2024, homomorphic encryption has moved beyond theoretical speculation into practical application. The underlying cryptography is no longer merely of academic interest; implementations have matured to the point where they are increasingly practical to deploy. The technology particularly shines in scenarios requiring secure collaboration across organisational boundaries where trust is limited.
In healthcare, comprehensive frameworks now enable researchers to conduct collaborative statistical analysis on health records while preserving privacy and ensuring security. These frameworks integrate privacy-preserving techniques including secret sharing, secure multiparty computation, and homomorphic encryption. The ability to analyse encrypted medical data has applications in drug development, where multiple parties need to use datasets without compromising patient confidentiality.
Financial institutions leverage homomorphic encryption for fraud detection across institutions without exposing customer data. Banks can collaborate on anti-money laundering efforts without revealing their customer relationships.
The VERITAS library, presented at the 2024 ACM Conference on Computer and Communications Security, became the first library supporting verification of any homomorphic operation, demonstrating practicality for various applications with less than three times computation overhead compared to the baseline.
Despite these advances, significant limitations remain. Homomorphic operations carry substantial computational overhead compared with their plaintext equivalents. Slow processing speeds make fully homomorphic encryption impractical for most real-time applications, and specialised cryptographic knowledge is required to deploy these solutions effectively.
Secure Multi-Party Computation and Collaborative Secrets
Secure multi-party computation, or MPC, takes a different approach to the same fundamental problem. Rather than computing on encrypted data, MPC enables multiple parties to jointly compute a function over their inputs while keeping those inputs completely private from each other. Each party contributes their data but never sees anyone else's contribution, yet together they can perform meaningful analysis that would be impossible if each party worked in isolation.
The technology has found compelling real-world applications that demonstrate its practical value. The Boston Women's Workforce Council has used secure MPC to measure gender and racial wage gaps in the greater Boston area. Participating organisations contribute their payroll data through the MPC protocol, allowing analysis of aggregated data for wage gaps by gender, race, job category, tenure, and ethnicity without revealing anyone's actual wage.
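The core trick behind protocols like this is additive secret sharing: each contributor splits its value into random shares that are individually meaningless but sum to the original. The sketch below shows a private sum among three organisations; the modulus, figures, and names are illustrative assumptions.

```python
# Additive secret sharing for a private sum: no single party ever sees an
# individual contribution, yet the shares reconstruct the aggregate exactly.
import secrets

MODULUS = 2**61 - 1                    # a large prime; arithmetic is modular

def share(value: int, n_parties: int) -> list[int]:
    """Split `value` into n random shares that sum to it modulo MODULUS."""
    shares = [secrets.randbelow(MODULUS) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MODULUS)
    return shares

# Three organisations with private payroll totals.
payrolls = [4_200_000, 9_750_000, 1_300_000]
n = len(payrolls)

all_shares = [share(v, n) for v in payrolls]     # one sharing per organisation
# Computing party i receives the i-th share from every organisation and sums it.
partial_sums = [sum(org[i] for org in all_shares) % MODULUS for i in range(n)]
# Only the partial sums are ever combined; the result is the joint aggregate.
total = sum(partial_sums) % MODULUS
assert total == sum(payrolls)
print("aggregate payroll:", total)
```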
The global secure multiparty computation market was estimated at USD 794.1 million in 2023 and is projected to grow at a compound annual growth rate of 11.8 percent from 2024 to 2030. In June 2024, Pyte, a secure computation platform, announced additional funding bringing its total capital to over USD 12 million, with patented MPC technology enabling enterprises to securely collaborate on sensitive data.
Recent research has demonstrated the feasibility of increasingly complex MPC applications. The academic conference TPMPC 2024, hosted by TU Darmstadt's ENCRYPTO group, showcased research proving that complex tasks like secure inference with Large Language Models are now feasible with today's hardware. A paper titled “Sigma: Secure GPT Inference with Function Secret Sharing” showed that running inference operations on an encrypted 13 billion parameter model achieves inference times of a few seconds per token.
Partisia has partnered with entities in Denmark, Colombia, and the United States to apply MPC in healthcare analytics and cross-border data exchange. QueryShield, presented at the 2024 International Conference on Management of Data, supports relational analytics with provable privacy guarantees using MPC.
Synthetic Data and the Privacy of the Artificial
While the previous technologies focus on protecting real data during analysis, synthetic data generation takes a fundamentally different approach. Rather than protecting real data through encryption or noise, it creates entirely artificial datasets that maintain the statistical properties and patterns of original data without containing any actual sensitive information. By 2024, synthetic data has established itself as an essential component in AI and analytics, with estimates indicating that roughly 60 percent of such projects now incorporate synthetic elements. The market has expanded from USD 0.29 billion in 2023 toward projected figures of USD 3.79 billion by 2032, representing a 33 percent compound annual growth rate.
Modern synthetic data creation relies on sophisticated approaches including Generative Adversarial Networks and Variational Autoencoders. These neural network architectures learn the underlying distribution of real data and generate new samples that follow the same patterns without copying any actual records. The US Department of Homeland Security Science and Technology Directorate awarded contracts in October 2024 to four startups to develop privacy-enhancing synthetic data generation capabilities.
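Full GANs and VAEs are beyond a short example, but the underlying idea (learn a distribution, then sample fresh records from it) can be shown with a crude Gaussian stand-in. The snippet below is purely illustrative and, on its own, offers no formal privacy guarantee.

```python
# A crude statistical stand-in for generative synthetic data: fit a
# multivariate normal to "real" records, then sample brand-new rows that
# preserve means and correlations without copying any original record.
import numpy as np

rng = np.random.default_rng(7)

# Stand-in for real (sensitive) records: age, sessions per week, spend.
real = rng.multivariate_normal(
    mean=[35, 12, 80],
    cov=[[60, 5, 30], [5, 9, 12], [30, 12, 400]],
    size=5000,
)

# "Training": estimate the joint distribution's parameters.
mu, cov = real.mean(axis=0), np.cov(real, rowvar=False)

# "Generation": sample synthetic rows from the fitted distribution.
synthetic = rng.multivariate_normal(mu, cov, size=5000)

# Utility check: the correlation structure should closely match the original.
print(np.round(np.corrcoef(real, rowvar=False), 2))
print(np.round(np.corrcoef(synthetic, rowvar=False), 2))
```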
Several platforms have emerged as leaders in this space. MOSTLY AI, based in Vienna, uses its generative AI platform to create highly accurate and private tabular synthetic data. Rockfish Data, based on foundational research at Carnegie Mellon University, developed a high-fidelity privacy-preserving platform. Hazy specialises in privacy-preserving synthetic data for regulated industries and is now part of SAS Data Maker.
Research published in Scientific Reports demonstrated that synthetic data can maintain similar utility (predictive performance) as real data while preserving privacy, supporting compliance with GDPR and HIPAA.
However, any method of generating synthetic data faces an inherent tension. Faithfully imitating the statistical distributions of the real data and guaranteeing privacy pull in opposite directions, forcing a trade-off between usefulness and protection.
Trusted Execution Environments and Hardware Sanctuaries
Moving from purely mathematical solutions to hardware-based protection, trusted execution environments, or TEEs, take yet another approach to privacy-preserving computation. Rather than mathematical techniques, TEEs rely on hardware features that create secure, isolated areas within a processor where code and data are protected from the rest of the system, including privileged software like the operating system or hypervisor.
A TEE acts as a black box for computation. Input and output can be known, but the state inside the TEE is never revealed. Data is only decrypted while being processed within the CPU package and automatically encrypted once it leaves the processor, making it inaccessible even to the system administrator.
Two main approaches have emerged in the industry. Intel's Software Guard Extensions (SGX) pioneered process-based TEE protection, dividing applications into trusted and untrusted components with the trusted portion residing in encrypted memory. AMD's Secure Encrypted Virtualisation (SEV) later brought a paradigm shift with VM-based TEE protection, enabling “lift-and-shift” deployment of legacy applications. Intel has more recently implemented this paradigm in Trust Domain Extensions (TDX).
A 2024 study available on ScienceDirect provides a comparative evaluation of TDX, SEV, and SGX implementations. The power of TEEs lies in their ability to perform computations on unencrypted data (significantly faster than homomorphic encryption) while providing robust, hardware-backed security guarantees.
Major cloud providers have embraced TEE technology. Azure Confidential VMs run virtual machines with AMD SEV where even Microsoft cannot access customer data. Google Confidential GKE offers Kubernetes clusters with encrypted node memory.
Zero-Knowledge Proofs and Proving Without Revealing
Zero-knowledge proofs represent a revolutionary advance in computational integrity and privacy technology. They enable the secure and private exchange of information without revealing underlying private data. A prover can convince a verifier that a statement is true without disclosing any information beyond the validity of the statement itself.
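A classic illustration is the Schnorr identification protocol: the prover convinces the verifier that it knows a secret exponent x with y = g^x mod p, without revealing x. The sketch below uses deliberately tiny, insecure parameters and exists only to show the three-move structure.

```python
# Toy interactive Schnorr proof of knowledge of a discrete logarithm.
# Parameters are insecure toy values chosen to keep the arithmetic visible.
import secrets

p, q, g = 1019, 509, 4        # p = 2q + 1; g generates the order-q subgroup

# Prover's secret and the public value it proves knowledge of.
x = secrets.randbelow(q)      # the secret exponent (never revealed)
y = pow(g, x, p)              # public: y = g^x mod p

# Move 1: prover commits to a random nonce.
r = secrets.randbelow(q)
t = pow(g, r, p)

# Move 2: verifier issues a random challenge.
c = secrets.randbelow(q)

# Move 3: prover responds; because r is uniformly random, s reveals nothing
# about x on its own.
s = (r + c * x) % q

# Verification: g^s must equal t * y^c (mod p), which holds exactly when the
# response was built from the genuine secret.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted without revealing x")
```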
In the context of data analytics, zero-knowledge proofs allow organisations to prove properties about their data without exposing the data. Companies like Inpher leverage zero-knowledge proofs to enhance the privacy and security of machine learning solutions, ensuring sensitive data used in training remains confidential while still allowing verification of model properties.
Zero-Knowledge Machine Learning (ZKML) integrates machine learning with zero-knowledge proof systems. The paper “zkLLM: Zero Knowledge Proofs for Large Language Models” addresses a challenge within AI legislation: establishing authenticity of outputs generated by Large Language Models without compromising the underlying training data. This intersection of cryptographic proofs and neural networks represents one of the most promising frontiers in privacy-preserving AI.
The practical applications extend beyond theoretical interest. Financial institutions can prove solvency without revealing individual account balances. Healthcare researchers can demonstrate that their models were trained on properly consented data without exposing patient records. Regulatory auditors can verify compliance without accessing sensitive business information. Each use case shares the same underlying principle: proving a claim's truth without revealing the evidence supporting it.
Key benefits include data privacy (computations on sensitive data without exposure), model protection (safeguarding intellectual property while allowing verification), trust and transparency (enabling auditable AI systems), and collaborative innovation across organisational boundaries. Challenges hindering widespread adoption include substantial computing power requirements for generating and verifying proofs, interoperability difficulties between different implementations, and the steep learning curve for development teams unfamiliar with cryptographic concepts.
Operational Integration of Privacy-Enhancing Technologies
Deploying privacy-enhancing technologies requires more than selecting the right mathematical technique. It demands fundamental changes to how organisations structure their analytics pipelines and governance processes. Gartner predicts that by 2025, 60 percent of large organisations will use at least one privacy-enhancing computation technique in analytics, business intelligence, or cloud computing. Reaching this milestone requires overcoming significant operational challenges.
PETs typically must integrate with additional security and data tools, including identity and access management solutions, data preparation tooling, and key management technologies. These integrations introduce overheads that should be assessed early in the decision-making process. Organisations should evaluate the adaptability of their chosen PETs, as scope creep and requirement changes are common in dynamic environments. Late changes in homomorphic encryption and secure multi-party computation implementations can negatively impact time and cost.
Performance considerations vary significantly across technologies. Homomorphic encryption is typically considerably slower than plaintext operations, making it unsuitable for latency-sensitive applications. Differential privacy may degrade accuracy for small sample sizes. Federated learning introduces communication overhead between devices and servers. Organisations must match technology choices to their specific use cases and performance requirements.
Implementing PETs requires in-depth technical expertise. Specialised skills such as cryptography expertise can be hard to find, often making in-house development of PET solutions challenging. The complexity extends to procurement processes, necessitating collaboration between data governance, legal, and IT teams.
Policy changes accompany technical implementation. Organisations must establish clear governance frameworks that define who can access which analyses, how privacy budgets are allocated and tracked, and what audit trails must be maintained. Data retention policies need updating to reflect the new paradigm where raw data may never be centrally collected.
The Centre for Data Ethics and Innovation categorises PETs into traditional approaches (encryption in transit, encryption at rest, and de-identification techniques) and emerging approaches (homomorphic encryption, trusted execution environments, multiparty computation, differential privacy, and federated analytics). Effective privacy strategies often layer multiple techniques together.
Validating Privacy Guarantees in Production
Theoretical privacy guarantees must be validated in practice, because small bugs in privacy-preserving software can easily compromise the intended protections. Production tools should implement cryptographic and statistical primitives carefully, following best practices in secure software design such as modular design, systematic code reviews, comprehensive test coverage, regular audits, and effective vulnerability management.
Privacy auditing has emerged as an important research area supporting the design and validation of privacy-preserving mechanisms. Empirical auditing techniques establish practical lower bounds on privacy leakage, complementing the theoretical upper bounds provided by differential privacy.
Canary-based auditing tests privacy guarantees by introducing specially designed examples, known as canaries, into datasets. Auditors then test whether these canaries can be detected in model outputs. Research on privacy attacks for auditing spans five main categories: membership inference attacks, data-poisoning attacks, model inversion attacks, model extraction attacks, and property inference.
A paper appearing at NeurIPS 2024 on nearly tight black-box auditing of differentially private machine learning demonstrates that rigorous auditing can detect bugs and identify privacy violations in real-world implementations. However, the main limitation is computational cost. Black-box auditing typically requires training hundreds of models to empirically estimate error rates with good accuracy and confidence.
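One common way to turn attack error rates into a concrete figure uses the hypothesis-testing view of differential privacy: any membership-inference attack against an (ε, δ)-differentially-private mechanism must satisfy FPR + e^ε · FNR ≥ 1 − δ, together with the symmetric inequality, so observed error rates imply a lower bound on ε. The helper below is a simplified sketch of that calculation, ignoring the confidence intervals a rigorous audit would add.

```python
# Empirical epsilon lower bound from a membership-inference audit's observed
# false positive rate (FPR) and false negative rate (FNR). Simplified: a real
# audit would place confidence intervals (e.g. Clopper-Pearson) on both rates.
import math

def empirical_epsilon_lower_bound(fpr: float, fnr: float, delta: float) -> float:
    """Smallest epsilon consistent with fpr + e^eps * fnr >= 1 - delta and
    fnr + e^eps * fpr >= 1 - delta; any claimed epsilon below this value is
    contradicted by the attack's observed success."""
    candidates = []
    if fnr > 0 and (1 - delta - fpr) > 0:
        candidates.append(math.log((1 - delta - fpr) / fnr))
    if fpr > 0 and (1 - delta - fnr) > 0:
        candidates.append(math.log((1 - delta - fnr) / fpr))
    return max(candidates, default=0.0)

# Example: an attack with 5% false positives and 40% false negatives rules out
# any epsilon below roughly 2.5 at delta = 1e-5.
print(empirical_epsilon_lower_bound(fpr=0.05, fnr=0.40, delta=1e-5))
```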
Continuous monitoring addresses scenarios where data processing mechanisms require regular privacy validation. The National Institute of Standards and Technology (NIST) has developed draft guidance on evaluating differential privacy protections, fulfilling a task under the Executive Order on AI. The NIST framework introduces a differential privacy pyramid where the ability for each component to protect privacy depends on the components below it.
DP-SGD (Differentially Private Stochastic Gradient Descent) is increasingly deployed in production systems and supported by open source libraries such as Opacus, TensorFlow Privacy, and JAX-based implementations. These libraries ship privacy accountants and related tooling that help organisations track and validate their privacy guarantees in practice.
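As a concrete illustration of how DP-SGD slots into an ordinary training loop, the sketch below assumes the Opacus 1.x PrivacyEngine interface around a toy PyTorch model; the model, data, and hyperparameters are placeholders rather than recommendations.

```python
# A minimal DP-SGD training sketch using Opacus (assumed 1.x API):
# per-sample gradients are clipped, Gaussian noise is added, and the spent
# privacy budget can be queried afterwards.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Toy dataset and model standing in for a real trend-classification task.
X = torch.randn(1024, 20)
y = torch.randint(0, 2, (1024,))
loader = DataLoader(TensorDataset(X, y), batch_size=64, shuffle=True)

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
criterion = nn.CrossEntropyLoss()

privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.1,   # scale of Gaussian noise added to clipped gradients
    max_grad_norm=1.0,      # per-sample gradient clipping bound
)

for epoch in range(3):
    for features, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(features), labels)
        loss.backward()
        optimizer.step()

# Report the privacy budget spent so far for a chosen delta.
delta = 1e-5
print(f"epsilon spent: {privacy_engine.get_epsilon(delta):.2f} at delta={delta}")
```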
Selecting the Right Technology for Specific Use Cases
With multiple privacy-enhancing technologies available, organisations face the challenge of selecting the right approach for their specific needs. The choice depends on several factors: the nature of the data, the types of analysis required, the computational resources available, the expertise of the team, and the regulatory environment.
Differential privacy excels when organisations need aggregate statistics from large datasets and can tolerate some accuracy loss. It provides mathematically provable guarantees and has mature implementations from major technology companies. However, it struggles with small sample sizes where noise can overwhelm the signal.
Federated learning suits scenarios where data naturally resides on distributed devices and where organisations want to train models without centralising data. It works well for mobile applications, IoT deployments, and collaborative learning across institutions.
Homomorphic encryption offers the strongest theoretical guarantees by keeping data encrypted throughout computation, making it attractive for highly sensitive data. The significant computational overhead limits its applicability to scenarios where privacy requirements outweigh performance needs.
Secure multi-party computation enables collaboration between parties who do not trust each other, making it ideal for competitive analysis, industry-wide fraud detection, and cross-border data processing.
Synthetic data provides the most flexibility after generation, as synthetic datasets can be shared and analysed using standard tools without ongoing privacy overhead.
Trusted execution environments offer performance advantages over purely cryptographic approaches while still providing hardware-backed isolation.
Many practical deployments combine multiple technologies. Federated learning often incorporates differential privacy for additional protection of aggregated updates. The most robust privacy strategies layer complementary protections rather than relying on any single technology.
Looking Beyond the Technological Horizon
The market for privacy-enhancing technologies is expected to mature with improved standardisation and integration, creating new opportunities in privacy-preserving data analytics and AI. The outlook is positive, with PETs becoming foundational to secure digital transformation globally.
However, PETs are not a silver bullet nor a standalone solution. Their use comes with significant risks and limitations ranging from potential data leakage to high computational costs. They cannot substitute existing laws and regulations but rather complement these in helping implement privacy protection principles. Ethically implementing PETs is essential. These technologies must be designed and deployed to protect marginalised groups and avoid practices that may appear privacy-preserving but actually exploit sensitive data or undermine privacy.
The fundamental insight driving this entire field is that privacy and utility are not necessarily zero-sum. Through careful application of mathematics, cryptography, and system design, organisations can extract meaningful insights from user content while enforcing strict privacy guarantees. The technologies are maturing. The regulatory pressure is mounting. The market is growing. The question is no longer whether platforms will adopt privacy-enhancing technologies for their analytics, but which combination of techniques will provide the best balance of utility and risk mitigation for their specific use cases.
What is clear is that the era of collecting everything and figuring out privacy later has ended. The future belongs to those who can see everything while knowing nothing.
References & Sources
- ISACA White Paper 2024: Exploring Practical Considerations and Applications for Privacy Enhancing Technologies
- Future of Privacy Forum: Privacy Enhancing Technologies
- Grand View Research: Privacy Enhancing Technologies Market Size Report 2030
- OECD: Emerging Privacy-Enhancing Technologies
- Federal Trade Commission: Keeping Your Privacy Enhancing Technology (PET) Promises
- Google Cloud Blog: Introducing BigQuery Differential Privacy with Tumult Labs
- Real-World Uses of Differential Privacy by Ted (desfontain.es)
- Dwork, C. Differential Privacy, ICALP 2006
- Harvard Edmond J. Safra Center for Ethics: Cynthia Dwork Profile
- ACM Queue: Federated Learning and Privacy
- Google PAIR: How Federated Learning Protects Privacy
- Nature Scientific Reports: Synthetic Data for Privacy-Preserving Clinical Risk Prediction
- US Census Bureau: Why the Census Bureau Chose Differential Privacy
- US Census Bureau: Understanding Differential Privacy
- NIST: Draft Guidance on Evaluating Privacy Protection Technique for the AI Era
- MDPI Applied Sciences: Privacy Auditing in Differential Private Machine Learning
- Harvard Data Science Review: Advancing Differential Privacy
- Grand View Research: Secure Multiparty Computation Market Size Report 2030
- ACM Queue: Multiparty Computation – To Secure Privacy Do the Math
- Roseman Labs: The Future of Secure Multi-Party Computation TPMPC 2024
- ACM CCS 2024: VERITAS Plaintext Encoders for Practical Verifiable Homomorphic Encryption
- SAP News: Fully Homomorphic Encryption Data Insights Without Sharing Data
- DHS Science and Technology: Synthetic Data Generator
- Datacebo: Synthetic Data in 2024 The Year in Review
- ScienceDirect: An Experimental Evaluation of TEE Technology
- Microsoft Learn: Trusted Execution Environment
- arXiv: A Survey on the Applications of Zero-Knowledge Proofs
- Cloud Security Alliance: Leveraging Zero-Knowledge Proofs in Machine Learning
- Usercentrics: Global Data Privacy Laws 2025 Guide
- Gartner: Top Five Trends in Privacy Through 2024

Tim Green UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk