The future of hypertext may involve sharing and recycling resources.

The technology behind hypertext is devastating to lives and our environment. In contrast, trees used for printing books might come from a plantation forest, where they are grown and harvested for purpose. But for distributing information globally and quickly, books are simply not an option. I'm unaware of any printed text that comes close to the power of computer-based hypertext. The benefits of hypertext are unlikely to be outweighed by any lower-fidelity medium.

What then would a future look like if we were to minimise the environmental cost of operating devices and global networks in order to deliver our hypertext system?

Our current devices are designed for much more than is necessary to power hypertext. Our computers are capable of some unimaginable feats: rendering photo-realistic environments in near or real time, detecting and categorising information in complex images and videos in real time, and streaming, encoding, or decoding incredibly high-definition live video. These are only a small selection.

The continued manufacture of new devices requires more of the expensive, dangerous-to-source minerals. Hypertext does not need a computer from the future. We can recycle existing machines, and we could even repurpose network modules that were never part of a computer, turning them into hypertext machines. The simple computers of the past and present are very capable hypertext machines; the caveat is that efficient software must be created to ensure usability. We can always improve on what we have.

Is there a safe energy source? As far as I know, "cleaner" energy sources still depend on destructive practices, like the mineral mining and manufacturing processes behind solar panels. Batteries are massively dependent on destructive mining practices. Would it help to decouple the battery from the device? Would that reduce passive consumption of energy? How can we improve awareness of how batteries work and how we source energy?

Although batteries have limited lifespans, how a device is used determines that length. We need to learn how to get our computers to consume less energy. What processes are required to run a hypertext machine? Can focusing solely on hypertext reduce the amount of power a device draws, and therefore extend how long batteries last in portable devices? Batteries last longer in devices with minimal demand at the hardware and software layers. Will we grow weary of the stimulated distractions of glamorous GPU/CPU-intensive applications, just as we have for junk food, as the cost of batteries, devices, and energy increases?

For the quantity of text a single person can read, it is not at all intensive to transmit over networks. It compresses very well, and is often used to structure and define more complex forms of data. If there were a more efficient way of communicating, we wouldn't use language, and by extension its textual representation, as the primary mode. Text is much easier for computers to handle than other media like images and audio; therefore, this common method humans use to communicate with each other can be reused to communicate with a computer, with minimal overhead. Support for languages in character encoding, the usability of text input and output, and user literacy determine whether text is more convenient than other communication methods.
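How well text compresses is easy to demonstrate. A minimal sketch using Python's standard `zlib` module, with a hypothetical sample paragraph repeated to simulate the redundancy of real writing:

```python
import zlib

# A paragraph of (hypothetical) plain English, repeated to simulate the
# redundancy of natural language.
text = ("Hypertext is plain text with links. It compresses well because "
        "natural language repeats letters, words, and phrases. ") * 20

raw = text.encode("utf-8")
compressed = zlib.compress(raw, level=9)

# The compressed form is a small fraction of the original size.
print(f"{len(raw)} bytes raw, {len(compressed)} bytes compressed")
```

On repetitive natural-language text like this, the compressed form typically shrinks to a small fraction of the original, which is why even slow or expensive links can carry large amounts of prose.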

Text is static: it is rendered once, and doesn't need special processors and APIs to generate it. However, rendering text readably in some scripts requires somewhat more intensive work than ASCII. To be usable, a device must support a given language, provide a font, and have software to render the text. Doing this efficiently is a basic requirement of almost any device. The key is to exploit this design for creating and rendering hypertext.

Screens are a necessity for reading and writing hypertext. However, they can use a lot of energy, are difficult to manufacture, expensive, and are highly likely to be replaced within the lifetime of the device. Having higher pixel density on a screen makes the text easier to read, and higher quality screens generally provide better contrast, which is also better for text. But this higher quality means they are more demanding in manufacturing, cost, and energy. Higher resolution means more computational power to render information onto the screen, so the parts for the computer become more complex, expensive, and require mining more minerals to produce. There is probably an upper limit to the quality requirement where the differences in readability are imperceptible.

E-ink displays are low-energy ink-on-paper simulators. They use less energy because they only need to refresh once to persist an image on the screen; after that, no additional energy is used. Text is static, not animated, so it only needs to be rendered once. Hypertext interactions are single taps on static text, and don't need interaction-feedback animations. E-ink is used for devices known as e-readers, and the files used on these devices borrow their format from HTML; they are designed for hypertext-like interactions. A downside of e-readers is that their slow refresh rate makes them less suitable for writing and coding than notebooks, tablets, and phones. Display innovation is likely to continue, but we should keep asking: what are the limits of our need for screens? For new hypertext devices, screens are where we should spend the most money to get the best quality and efficiency, and value them accordingly, as they are necessary for the whole experience of hypertext.

Portability is pivotal to the adoption of computers and, by extension, the Internet and hypertext. It's absurd to imagine someone carrying a large backpack filled with a computer, only for it to need to be plugged in somewhere. Portability has its own unique requirements. Lightweight components are essential to building such devices. Nothing performed by the device should require large amounts of energy. Data transfer should be minimised. Text can be transmitted very easily, and compresses well. Loading anything over a network beyond what is required to fetch and render hypertext should be heavily scrutinised. Data transfer over radio requires large amounts of energy from the telecommunications network to provide this service to every person it can reach.

People often pay for their own Internet connection, so the end user is responsible for the amount of data being transmitted, and for the amount they must spend to access it. The exact quantity per application is out of the control of end users, as they generally don't build the browsers, websites, or other software. If a user is encouraged to access hypertext instead of other media, and the hypertext doesn't come with the baggage of additional resources, then the data requirement is limited to the bytes required to send the textual information along with some markup. Beyond minimalistic character encoding, compressing textual information relates to linguistics and quality of writing, which goes beyond technology and into the practice and theory of writing. Software that consumes a large amount of resources when rendering hypertext is irrational. Protocols like HTTP define no upper limit to the amount of data they can transfer, so they must be implemented thoughtfully.
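One way a thoughtful implementation can compensate for HTTP's lack of a size limit is for the client to impose its own cap before reading a response body. A minimal sketch, where the limit value and function name are assumptions for illustration, and an in-memory stream stands in for a network response:

```python
import io

# The client's self-imposed upper limit on response size (an assumed value).
MAX_RESPONSE_BYTES = 64 * 1024

def read_capped(stream, limit=MAX_RESPONSE_BYTES):
    """Read at most `limit` bytes from a response stream; refuse anything larger."""
    # Read one byte past the limit so oversized responses can be detected.
    data = stream.read(limit + 1)
    if len(data) > limit:
        raise ValueError("response exceeds the configured size limit")
    return data

# Stand-in for a network response stream.
small = read_capped(io.BytesIO(b"<p>hello</p>"), limit=1024)
```

A page that fits under the cap is returned whole; anything larger is rejected before it can exhaust the user's data allowance.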

Minimal hypertext doesn't need layout markup. Producing the information we consume costs money, and to cover those costs there are ads on websites. The modern web uses layout markup to make websites more attractive and allow for effective advertising. Ad blockers and legal restrictions on ad targeting correlate with the increase in paywalls and banners requesting compliance with tracking. Tracking tools, banners, paywalls, and ads all add extra data to a request. Even if a user pays for content, this doesn't beget a minimal experience. The modern web is obviously popular, and unless it becomes uneconomical, will remain how it is indefinitely. It is still possible to build websites with minimal HTML and serve them relatively inexpensively, and to render these webpages with non-major browsers optimised for text and efficiency.
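The scale of the difference is easy to measure. A complete, valid hypertext page with no layout markup, scripts, or trackers (the page content below is hypothetical) weighs a few hundred bytes, versus the multi-megabyte weight of a typical modern web page:

```python
import gzip

# A complete, minimal hypertext page: structure and links only.
page = """<!DOCTYPE html>
<html lang="en">
<head><meta charset="utf-8"><title>Notes</title></head>
<body>
<h1>Notes</h1>
<p>Minimal hypertext: text, structure, and <a href="links.html">links</a>.</p>
</body>
</html>
"""

raw = page.encode("utf-8")
# Both figures come out under a kilobyte.
print(f"{len(raw)} bytes raw, {len(gzip.compress(raw))} bytes gzipped")
```

At these sizes, even a cheap server or a proxy on recycled hardware can deliver pages to many readers.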

Local Internet proxies may reduce the load on the larger infrastructure and cut costs for end users. Alternate protocols and networks, as well as hypertext-optimised hardware and software, with no limitations other than utilisation and adoption, hint at the plausibility of low-cost hypertext infrastructure. How these alternatives will be adopted and persist is probably unpredictable. But making it easier to find and adopt alternatives to high-cost, complex infrastructure is necessary to make them possible. There are various ways to implement alternatives, so building them is probably less important than distributing the knowledge of how to do so.

Hypertext requires a computing device with a screen powerful enough to render and write text, networking-capable modules, and a power supply. Everything else is implementation. Code only needs to be written, compiled, and distributed once. A computer can store and continually render new text it accesses over the network. A network protocol only needs to be defined and implemented once. The knowledge will change and evolve, and it needs the collective to iterate. The ideas must evolve and change, and they can be reused and transformed endlessly with an appropriately designed device. Hardware has an upper limit to reusability, but this can be incredibly high, longer than a single lifetime of its user, when designed appropriately. Good design doesn't arise by building all design variations and comparing them. Existing tools can be used to ideate and design better ones, without having to build something immediately. This reduces the cost of iteration, which is, in my opinion, the greatest cost in design. Reduce the cost of iteration and designing something good becomes cheaper.

Predicting the future is impossible. We can make obvious statements about the near future, but it's difficult to be accurate without luck, power, and circumstance. Instead, we can make statements about how we plan to use our current resources.

E-ink is promising, especially if refresh rates increase or we design a UI to overcome this hurdle. Costs for screens might not decrease, but this cost is considered within the context of a complete device. The total cost of all components would be adjusted in proportion to the intended use of the device and the quantity manufactured. External portable power sources are another plausible design path, and are already commonplace. Wi-Fi modules are the minimum requirement for portable networking. Network access could come from a router network, a portable telecommunications "hotspot" device, or something larger scale. Built-in SIM-card GSM modules are not a necessity, and hotspots are already commonplace, especially in Japan. What makes this deconstruction interesting is that the power, screen, networking, and even other elements such as storage can be modular, rather than embodied within a single device, inseparable from the core computer. In fact, Raspberry Pi led the way in this regard, providing a consistent design and platform that allows others to supply the remaining components needed to fulfil the requirements of an application-specific, completely modular device.

Smartphones, the most ubiquitous portable hypertext devices we use today, contain this GSM module to keep them connected to the Internet. It is only useful via a paid subscription: stop paying and you will end up relying on the Wi-Fi module to communicate. The acquisition and usage of this service ties a personally identifiable device to you, your Internet usage, and your immediate location. The future might be more paranoid and less willing to use such infrastructure, despite the convenience.

A likely scenario is devices utilising an alternative infrastructure that proxies access to the Internet, offering free access to less resource-intensive, locally relevant hypertext. Today's culture and infrastructure also require paying for various services to operate servers, using web browsers developed by a small number of large corporations, paying for domain names, and relying on large centralised services for all of these components to talk to each other.

Hypertext can be local, free to access and share, made available to others from your device, distributed, and without middlemen. Advancements within this space can be achieved if there's enthusiasm to explore the potential and share knowledge. One major complexity with alternative systems is hyperlinking. Some P2P system designs might provide solutions, such as replacing the "pointer to URL" mechanism with content addresses for resources, in a tracker-torrent-style model of distributed file networks.
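The content-addressing idea can be sketched in a few lines. In this toy model (all names are hypothetical, and dictionaries stand in for peers on a network), a link is the hash of the document itself rather than a URL pointing at one server, so any peer holding the bytes can serve it, and the fetcher can verify what it received:

```python
import hashlib

def address(content: bytes) -> str:
    # A document's address is derived from its content, not its location.
    return hashlib.sha256(content).hexdigest()

# Two peers, each a simple store mapping address -> content.
peer_a: dict[str, bytes] = {}
peer_b: dict[str, bytes] = {}

doc = b"<p>A page identified by what it says, not where it lives.</p>"
link = address(doc)

# Publishing: both peers seed the same document under the same address.
peer_a[link] = doc
peer_b[link] = doc

# Fetching: ask any peer, then verify the bytes actually match the link.
fetched = peer_b[link]
assert address(fetched) == link
```

Because the address and the content are bound together, a hyperlink keeps working as long as any peer still seeds the document, and a malicious or corrupted copy is detected immediately.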

When considering the future and solutions to current problems, modularity and interoperability appear to be prevalent themes. The issues we face arise not from a lack of options, but from a lack of knowledge and pragmatism within the greater population. Building portable computers from modular components, recycled and bleeding-edge, and using a terminal to surf an anonymous alt-protocol, clicking through hyperlinks with only a keyboard, re-seeding interesting documents, all while connected to a satellite-proxied local mesh network, seems like an abstract and fantastical task for the "non-technical" types. The way our current web works is simpler: people buy phones with a browser pre-installed, and we primarily surf via search engines and social media.

A reader of this is likely capable of what a researcher, educator, or hacker can do, but not what a government, monopolistic telecommunications company, or rare-earth mineral mining magnate could. A future-looking attitude for such an individual could involve avoiding the waste of usable devices and providing others with knowledge of how to reduce the costs of accessing the Internet and publishing hypertext. Share and recycle resources. If our devices stop being produced, we should know how to use what we have. If we lose access to the Internet, we should know how to build and connect local networks. If we can't continue to access knowledge, we should store, mirror, and repurpose as much as we can, and continue to write about what we know.