Schofield’s Laws of Computing: Timeless Principles for Developers

As software engineers, we live and breathe data. We spend our days knee-deep in databases and data flows, architecting systems to process, transform, and serve information at scale. But in the daily grind of code sprints and shipping features, it can be all too easy to lose sight of the fundamentals—the core philosophies that should underpin any effort to create responsible, resilient software.

Enter Jack Schofield, the late British tech journalist who dedicated over thirty years to covering the computing industry for The Guardian. Across his prolific career as a writer, editor, and blogger, Schofield developed a series of maxims that came to be known as "Schofield’s Laws of Computing":

  1. Never put data into a program unless you can see exactly how to get it out.
  2. Data doesn’t really exist unless you have at least two copies of it.
  3. The easier it is for you to access your data, the easier it is for someone else to access your data.

Deceptively simple, these principles get at some essential truths that every developer should internalize. And although the technology landscape has shifted dramatically since Schofield first put them forth in the 2000s, his laws are arguably more relevant than ever in the cloud-native, data-saturated, privacy-focused world we now inhabit.

Let’s dive into each of Schofield’s Laws in depth and examine why they should be core considerations in any modern software project. Along the way, we’ll explore real-world examples, best practices, and cautionary tales that underscore their enduring importance.

Schofield’s First Law: The Data Portability Imperative

"Never put data into a program unless you can see exactly how to get it out." — Jack Schofield, 2003

Schofield’s first law is a call to arms against vendor lock-in—the all-too-common scenario where committing to a software platform, intentionally or unintentionally, leaves you at the mercy of the provider’s whims. Attracted by a slick UI or a generous free tier, you pour resources into adopting a solution, only to later run into deal-breaking issues like:

  • Changes in terms of service that cross ethical lines or create liabilities
  • Acquisition by another company that takes the product in an undesirable direction
  • Price hikes, feature limitations, or shifts to a less favorable business model
  • Abrupt shutdowns or abandonment that orphan your data

The graveyard of abandoned platforms is littered with examples. Consider Google Photos, which offered free unlimited high-quality photo storage for over five years before announcing in late 2020 that the policy would end in mid-2021, leaving users scrambling to find alternatives or pony up for a paid plan [1].

Or Uber and Lyft, which notoriously make it difficult for users to extract a complete archive of their ride history—data that could be invaluable for business expense tracking [2]. Even in sensitive domains like healthcare, patients are often shocked to find that they can’t easily obtain their full medical record from providers, putting them at risk in emergency situations [3].

The lesson is clear: evaluating a platform’s data portability should be a central consideration in any purchasing or adoption decision. As developers and technology leaders, we have a responsibility to ensure that our organizations always have a clear and easy path to liberating data if needed. Some key questions to ask:

  • Does the software provide robust export tools for all your data and content?
  • Are exports available in non-proprietary, widely compatible formats?
  • Will the data remain portable as you scale up use of the platform?
  • How have the provider and product responded to past requests for data access?

By making data portability a dealbreaker, you can save yourself from some nasty vendor lock-in headaches down the road. It’s heartening to see the software industry starting to embrace data liberation as a core value, with major players like Google, Twitter, and Facebook now providing self-serve data export tools.

On the regulatory front, the European Union’s trailblazing General Data Protection Regulation (GDPR) has also done much to advance portability by enshrining it as a fundamental user right. Under Article 20, data subjects can demand that controllers provide their personal data in a "structured, commonly used, and machine-readable format" and transmit it to another controller [4].
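
As a concrete illustration of what a "structured, commonly used, and machine-readable" export can look like, here is a minimal sketch that bundles everything an application holds about a user into a single JSON file. The accessor functions are hypothetical stand-ins for your real data layer; the point is simply that every piece of user data should have an obvious, scriptable path out of the system.

    import json
    from datetime import datetime, timezone
    from pathlib import Path

    def fetch_profile(user_id: str) -> dict:
        """Hypothetical accessor; replace with calls to your real data layer."""
        return {"user_id": user_id, "name": "Ada Lovelace", "email": "ada@example.com"}

    def fetch_orders(user_id: str) -> list[dict]:
        """Hypothetical accessor; replace with calls to your real data layer."""
        return [{"order_id": "1001", "total": 42.00, "currency": "USD"}]

    def export_user_data(user_id: str, out_dir: Path = Path("exports")) -> Path:
        """Bundle everything held about a user into one portable JSON file."""
        out_dir.mkdir(parents=True, exist_ok=True)
        payload = {
            "exported_at": datetime.now(timezone.utc).isoformat(),
            "profile": fetch_profile(user_id),
            "orders": fetch_orders(user_id),
        }
        out_path = out_dir / f"{user_id}_export.json"
        out_path.write_text(json.dumps(payload, indent=2))
        return out_path

    if __name__ == "__main__":
        print(export_user_data("u-123"))

Whatever format you choose (JSON, CSV, or a SQLite dump), the test is the same: could a user, or a competing tool, consume the export without ever touching your software?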

While not all organizations are subject to GDPR, adopting its spirit and prioritizing seamless data portability is simply good practice. So the next time you’re evaluating a new tool, remember Schofield’s first law and repeat the mantra: "Export early, export often!"

Schofield’s Second Law: The Power of Redundancy

"Data doesn‘t really exist unless you have at least two copies of it." — Jack Schofield, 2008

Schofield’s second law underscores the paramount importance of backups in an unpredictable world. If you only have one copy of your data, you’re essentially gambling that nothing will ever go wrong. But as developers well know, something always goes wrong—usually at the worst possible time.

Consider some of the catastrophes that could suddenly vaporize your lone data copy:

  • Hardware failures due to age, defects, or physical damage
  • Device loss or theft while traveling or commuting
  • File system corruption that renders drives unreadable
  • Malware or ransomware that wipes or encrypts data
  • Natural disasters that destroy on-premise infrastructure

Even user error, like an accidental deletion or overwrite, can mean game over for your data. That’s why many experts espouse the 3-2-1 backup rule: maintain at least three copies of your data, on two different storage media, with one copy located offsite [5].
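
As a rough illustration of what 3-2-1 can look like in code, the sketch below keeps the primary copy in place, copies it to a second storage medium, and pushes a third copy offsite to object storage via boto3. The paths and bucket name are placeholders, and a production setup would add scheduling, retention policies, and encryption.

    import shutil
    from pathlib import Path

    import boto3  # third-party AWS SDK; the offsite copy is assumed to live in S3

    SOURCE = Path("/data/app.db")                 # primary working copy (copy 1)
    SECOND_MEDIUM = Path("/mnt/external/app.db")  # different physical medium (copy 2)
    OFFSITE_BUCKET = "example-offsite-backups"    # placeholder bucket name (copy 3)

    def backup_3_2_1() -> None:
        # Copy 2: an external drive, NAS, or other second medium
        SECOND_MEDIUM.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(SOURCE, SECOND_MEDIUM)

        # Copy 3: offsite object storage in another physical location
        s3 = boto3.client("s3")
        s3.upload_file(str(SOURCE), OFFSITE_BUCKET, f"backups/{SOURCE.name}")

    if __name__ == "__main__":
        backup_3_2_1()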

The numbers bear out the urgency of this advice. According to a 2020 survey by Acronis, 50% of organizations have experienced data loss resulting in downtime, with system crashes, human error, and cyber attacks among the leading culprits [6]. The costs of these incidents can be staggering: 91% of enterprises report that a single hour of downtime costs them $300,000 or more [7].

Even tech giants aren’t immune, as evidenced by the 2017 GitLab incident, in which an errant server wipe deleted around 300 GB of production database data. The damage was compounded by the fact that all five of the company’s backup and replication mechanisms turned out to be broken or misconfigured, and roughly six hours of user data were lost for good [8].

Cloud storage has done much to simplify and streamline backups, but it’s not a panacea. Outages do happen, and depending on your provider and plan, your data may not be automatically protected. A 2021 fire at French cloud provider OVHcloud destroyed one of its Strasbourg data centers and badly damaged another, permanently deleting data for customers without offsite backups [9].

The takeaways for developers and organizations are clear:

  • Implement a 3-2-1 (or more) backup strategy for all critical data assets
  • Automate backups to run at regular intervals based on your recovery point objective
  • Verify and test backups frequently to ensure they’re working as expected (see the verification sketch after this list)
  • Utilize multiple physical locations and vendors to mitigate risk of catastrophic events
  • Encrypt backups and secure them against modification, especially in cloud environments
  • Define clear roles and procedures for backup administration and emergency restores
  • Err on the side of more redundancy versus less for truly mission-critical information
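
Verification is the step teams most often skip, so here is a minimal sketch of one way to sanity-check a file-based backup: stream both files through SHA-256 and compare the digests, which at least proves the copy is complete and uncorrupted. Note that this only makes sense for static copies or snapshots; for a live database, the real test is restoring the backup somewhere and inspecting the result.

    import hashlib
    from pathlib import Path

    def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
        """Stream the file so large backups never need to fit in memory."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify_backup(source: Path, backup: Path) -> bool:
        """True if the backup is byte-for-byte identical to the source."""
        return sha256_of(source) == sha256_of(backup)

    if __name__ == "__main__":
        ok = verify_backup(Path("/data/app.db"), Path("/mnt/external/app.db"))
        print("backup OK" if ok else "backup MISMATCH - investigate immediately")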

While no backup plan is bulletproof, aligning with Schofield’s second law and prioritizing redundancy can make the difference between a close call and a company-ending data loss event. Don’t let your data become a statistic!

Schofield’s Third Law: Balancing Security and Accessibility

"The easier it is for you to access your data, the easier it is for someone else to access your data." — Jack Schofield, 2008

Schofield’s third and final law encapsulates a core conflict in modern computing: the tradeoff between data accessibility and data security. In a world of smartphones, tablets, and smart assistants, we’ve grown accustomed to having our data at our fingertips anytime, anywhere. But this convenience comes with a dark side—by making our information so accessible to ourselves, we may be inadvertently making it all too accessible to malicious actors.

This tension has fueled an epidemic of data breaches and leaks, as organizations struggle to balance user demands for seamless data access with the realities of an increasingly sophisticated threat landscape. In their 2021 Data Risk Report, data security firm Varonis found that the average company leaves 20% of its folders open to every employee, a figure that jumps to 35% for companies with over 1 TB of data [10]. It’s a hacker’s dream and a CISO’s nightmare.

To make matters worse, many users compound the problem with poor security hygiene. Weak and reused passwords are rampant, with a 2019 Google study finding that 65% of people use the same password across multiple accounts [11]. Hackers exploit these tendencies through techniques like:

  • Credential stuffing – Using known username/password pairs from data breaches to gain unauthorized access
  • Dictionary attacks – Guessing weak passwords based on common words and phrases
  • Brute force attacks – Systematically trying all possible character combinations to crack passwords
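
All three techniques boil down to making lots of cheap guesses, which is why storing passwords with a deliberately slow, salted key-derivation function is such an effective countermeasure. Below is a minimal sketch using Python’s standard-library PBKDF2 purely as an illustration; in production you would more likely reach for a maintained implementation of bcrypt, scrypt, or Argon2.

    import hashlib
    import hmac
    import os

    ITERATIONS = 600_000  # a high iteration count makes every guess expensive

    def hash_password(password: str) -> tuple[bytes, bytes]:
        """Return (salt, derived_key); store both, never the raw password."""
        salt = os.urandom(16)  # unique per user, defeats precomputed tables
        key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return salt, key

    def verify_password(password: str, salt: bytes, expected_key: bytes) -> bool:
        key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return hmac.compare_digest(key, expected_key)  # constant-time comparison

    if __name__ == "__main__":
        salt, stored = hash_password("correct horse battery staple")
        print(verify_password("correct horse battery staple", salt, stored))  # True
        print(verify_password("hunter2", salt, stored))                       # False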

Developers aren’t immune either—a 2019 study from North Carolina State University found that secrets such as passwords and API keys had leaked into over 100,000 public GitHub repositories, with thousands of new secrets exposed every day, many of them trivially easy for attackers to harvest [12].
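
Attackers don’t need sophisticated tooling to harvest these secrets; simple pattern matching over public code turns up plenty. The sketch below shows the general idea with two illustrative regexes (an AWS-style access key ID and a hard-coded assignment to a suspiciously named variable). Real scanners such as gitleaks or truffleHog ship far more rules and also search git history, and they are the better choice in practice.

    import re
    import sys
    from pathlib import Path

    # Illustrative patterns only; real scanners ship hundreds of rules.
    PATTERNS = {
        "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
        "hardcoded_secret": re.compile(
            r"(?i)(password|secret|api[_-]?key)\s*[:=]\s*['\"][^'\"]{8,}['\"]"
        ),
    }

    def scan_file(path: Path) -> list[tuple[int, str]]:
        """Return (line_number, rule_name) for every suspicious-looking line."""
        hits = []
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for name, pattern in PATTERNS.items():
                if pattern.search(line):
                    hits.append((lineno, name))
        return hits

    if __name__ == "__main__":
        root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
        for file_path in root.rglob("*.py"):  # only Python files, for brevity
            for lineno, rule in scan_file(file_path):
                print(f"{file_path}:{lineno}: possible {rule}")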

So how do you strike a balance between data accessibility and security? There’s no silver bullet, but some best practices include:

  • Enforcing strong, unique passwords across all systems and accounts
  • Implementing multi-factor authentication (MFA) wherever possible
  • Encrypting sensitive data both at rest and in transit using industry-standard algorithms (a brief encryption-at-rest sketch follows this list)
  • Tightly controlling and auditing permissions following the principle of least privilege
  • Regularly reviewing and rotating secrets like API keys and database credentials
  • Leveraging tools like password managers and secret vaults to safely store and share logins
  • Educating teams on security best practices and common attack vectors
  • Instituting a robust and responsive security incident plan to mitigate breaches
  • Proactively deleting data that’s no longer needed to limit exposure
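
To make the encryption item concrete, here is a minimal sketch of encrypting sensitive data at rest using the Fernet construction from the widely used Python cryptography package. The key handling is deliberately simplified; in a real system the key would come from a secrets manager or KMS, never from source control or the same disk as the data.

    from cryptography.fernet import Fernet  # pip install cryptography

    def demo() -> None:
        # In production, load the key from a secrets manager or KMS,
        # never from source control or the same disk as the data.
        key = Fernet.generate_key()
        fernet = Fernet(key)

        plaintext = b"patient_id=42;diagnosis=..."
        token = fernet.encrypt(plaintext)  # safe to store in the database
        recovered = fernet.decrypt(token)  # requires the key, and only the key

        assert recovered == plaintext
        print("ciphertext prefix:", token[:32])

    if __name__ == "__main__":
        demo()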

At the end of the day, perfect security is a myth. Determined attackers will find vulnerabilities to exploit, and mistakes will inevitably occur. But by keeping Schofield’s third law top of mind and baking security into every layer of your application and organization, you can dramatically reduce your risk profile. The goal is not to eliminate friction entirely, but to introduce just enough of it for attackers that intrusion becomes impractical while legitimate access stays easy.

Schofield’s Legacy: Towards Ethical Software

It’s been nearly two decades since Jack Schofield proposed the first of his computing laws, but their wisdom has only grown more relevant with time. As software eats the world and data becomes the defining asset of the 21st century, reflecting deeply on how we handle information at scale is more crucial than ever.

Schofield’s principles of portability, redundancy, and security might seem like basic blocking and tackling for developers. But all too often, in the mad dash to ship and scale, these fundamentals fall by the wayside. The result is a world of brittle systems, data hostage situations, and devastating breaches—all utterly avoidable with a bit of discipline and foresight.

At their core, Schofield’s laws are about building software responsibly—creating applications and architectures that respect user agency, ownership, and privacy as inviolable rights. They’re about shifting from a mindset of data hoarding to data stewardship, and from growth at all costs to sustainable and ethical scale.

These might seem like lofty ideals, but they’re more actionable than one might think. Make data portability a release requirement, not an afterthought. Institute a robust and transparent backup strategy from day one. Treat user secrets as a sacred trust and invest aggressively in security. Step by step, decision by decision, we can craft systems that live up to Schofield’s vision.

Embracing this philosophy of mindful development is how we can best honor Schofield’s legacy. While he may no longer be with us, his laws live on—a lodestar for anyone who aspires to build technology that endures. Not just for the sake of profits and market share, but for the dignity of the users who entrust us with their data.

That’s a worthy goal for any developer, and one that will only become more important as software permeates every aspect of our lives. Because in the end, code is power—and as Schofield so presciently understood, wielding that power responsibly is the key to a brighter technological future.
