22 August, 2023

From Code to Screen (Part 1)

Explore the evolution of computer interfaces from text-based terminals to modern graphical systems. Learn about window managers, early Windows OS, and how applications render on screen.


    From Code to Screen

    Have you ever wondered why, when you open an app on your computer, it appears in its own window? What controls window maximizing, minimizing, and overlapping? Why does the exit [x] button look different on Windows and Mac? Why, when Windows XP lags, do windows leave those... "nostalgic" trails?

    This series of articles titled "From Code to Screen" marks my return after a long absence.

    So how do we get from code to screen? Let's explore in this series!

    Once Upon a Time...

    Long ago, in the days of the ancestors of our modern computers, machines had no graphics processing units (GPUs) and could only display text.

    The image below shows the screen of one such computer. Actually, it's a "terminal":

    So how are terminals and screens different? In short, terminals receive text signals (not image signals in pixel form like screens). For example, if you want to display "HELLO" on the screen, you need to send the terminal binary data "01001000 – 01000101 – 01001100 – 01001100 – 01001111" corresponding to the 5 letters H – E – L – L – O. Where it appears on the screen, or what font it uses, is determined by the terminal.
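
    To make this concrete, here is a minimal sketch (modern C, not actual terminal firmware) that sends exactly those five character codes to whatever terminal is attached to standard output:

    /* A minimal sketch: send the five ASCII codes for H-E-L-L-O to the
     * terminal on standard output. We transmit character codes, not pixels;
     * the terminal decides where they appear and in what font. */
    #include <stdio.h>

    int main(void)
    {
        const unsigned char codes[] = { 0x48, 0x45, 0x4C, 0x4C, 0x4F }; /* H E L L O */
        for (size_t i = 0; i < sizeof codes; i++)
            putchar(codes[i]);   /* one character code at a time */
        putchar('\n');
        return 0;
    }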

    At this point, you might be thinking: isn't this the "ancestor" of telnet? Exactly: telnet, and later ssh, operate primarily on this principle. Instead of transmitting image signals, text is enough.

    But just text is a bit limiting, right? What if we want to split the screen to display two columns? And so, the "window manager" was born:

    The idea of this early "window manager" was to use special box-drawing characters from the terminal's character set to "draw" the edges and corners of windows. Note: don't confuse this concept with the "Windows" operating system! Think of a "window manager" as "software that manages how things are displayed".

    Simply put, the "window manager" creates two (or more) virtual terminals, and each program can "send" text data to one terminal. The "window manager" software then handles "merging" these multiple data streams, placing them in the correct position on the screen, and then drawing the outer border lines.

    The image below shows the special characters used to draw the outer border of a window on a terminal:
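
    To illustrate the idea, here is a toy sketch (my own, not any real window manager's code) that composes two "virtual terminals" side by side on one character grid and draws a border around each. For portability it uses plain +, - and | instead of the terminal's box-drawing characters:

    /* A toy "window manager": the whole screen is one character grid,
     * each program writes its text into a rectangular region, and the
     * manager draws a border around that region. */
    #include <stdio.h>
    #include <string.h>

    #define COLS 40
    #define ROWS 10

    static char screen[ROWS][COLS];

    /* Draw a bordered "window" at column x, row y, sized w x h,
     * and copy the program's text just inside the border. */
    static void draw_window(int x, int y, int w, int h, const char *text)
    {
        for (int r = 0; r < h; r++) {
            for (int c = 0; c < w; c++) {
                char ch = ' ';
                if (r == 0 || r == h - 1) ch = '-';      /* top / bottom edge */
                if (c == 0 || c == w - 1) ch = '|';      /* left / right edge */
                if ((r == 0 || r == h - 1) &&
                    (c == 0 || c == w - 1)) ch = '+';    /* corners           */
                screen[y + r][x + c] = ch;
            }
        }
        memcpy(&screen[y + 1][x + 2], text, strlen(text)); /* the program's text */
    }

    int main(void)
    {
        memset(screen, ' ', sizeof screen);      /* blank "screen"             */
        draw_window(0, 0, 18, 5, "left pane");   /* virtual terminal #1        */
        draw_window(20, 0, 18, 5, "right pane"); /* virtual terminal #2        */

        for (int r = 0; r < ROWS; r++)           /* send the merged result out */
            printf("%.*s\n", COLS, screen[r]);
        return 0;
    }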

    Adding Graphics

    Now it's 1985. You have a copy of Windows 1.0 in your hands, you excitedly insert the floppy disk into the drive... it goes... click... click... whirrrr... and then this screen appears:

    It's interesting that the screen is no longer just a black background with green text. Now, all displayed software consists of vibrant pixels, and one window can even overlap another.

    What happened?

    Let's look at the source file of a simple Windows 1.0 program. Introducing HELLO.C: https://github.com/NCommander/win1-hello-world-annotations/blob/master/HELLO.C

    The following snippet is called every time the window's contents need to be drawn on screen (in Windows terms, whenever the window receives a WM_PAINT message, for example when it is first shown or when part of it is uncovered):

    void HelloPaint( hDC )
    HDC hDC;                          /* handle to the device context we draw into */
    {
      TextOut( hDC,
               (short)10,             /* x position, in pixels       */
               (short)10,             /* y position, in pixels       */
               (LPSTR)szMessage,      /* pointer to the message text */
               (short)MessageLength ); /* its length, in characters  */
    }
    

    This code simply displays the text "Hello Windows!" on the screen. But if it's just one static line of text, why does it need to be drawn over and over again?

    Now, think of it like a painter. Suppose you want to draw two overlapping shapes, one red and one navy blue, with the navy blue one on top. First you have to draw the red shape, then the navy blue shape:

    Now suppose we reverse the order: navy blue underneath, red on top. The simplest way is to erase everything, draw the navy blue shape first, and then draw the red one:

    Similarly, the ability of windows to overlap comes down to the window manager choosing which one is "drawn" first and which later. In early versions, each program was responsible for "drawing" its own window content directly on the screen, and the window manager essentially only decided "who draws first, who draws later".
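
    Here is a tiny sketch of that draw-order idea, using a made-up framebuffer of characters instead of real pixels: whichever rectangle is filled last covers the other, and swapping the two fill_rect calls flips the overlap:

    /* Draw order decides what overlaps: the shape painted last wins. */
    #include <stdio.h>
    #include <string.h>

    #define W 20
    #define H 8

    static char fb[H][W];   /* pretend framebuffer: one character per "pixel" */

    static void fill_rect(int x, int y, int w, int h, char color)
    {
        for (int r = y; r < y + h; r++)
            for (int c = x; c < x + w; c++)
                fb[r][c] = color;
    }

    int main(void)
    {
        memset(fb, '.', sizeof fb);     /* erase everything first              */
        fill_rect(1, 1, 10, 5, 'R');    /* draw the red shape ...              */
        fill_rect(6, 3, 12, 4, 'B');    /* ... then navy blue, which covers it */

        for (int r = 0; r < H; r++)
            printf("%.*s\n", W, fb[r]);
        return 0;
    }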

    Later window managers became more sophisticated. In modern versions, instead of letting applications "draw" directly on the screen, each application "draws" into its own temporary, virtual screen, and a component called a compositor then combines those buffers into the final "image" that appears on the real screen.
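
    A minimal sketch of that compositing idea, again with a made-up character "screen": each window paints only into its own off-screen buffer, and the compositor copies the buffers onto the screen back to front:

    /* Compositing: windows never touch the real screen directly. */
    #include <stdio.h>
    #include <string.h>

    #define W 24
    #define H 8

    struct window {
        int x, y;            /* where the window sits on the screen */
        char pixels[4][10];  /* the window's own off-screen buffer  */
    };

    static char screen[H][W];

    /* The "compositor": copy every window's buffer onto the screen, back to front. */
    static void composite(const struct window *wins, int count)
    {
        memset(screen, '.', sizeof screen);
        for (int i = 0; i < count; i++)
            for (int r = 0; r < 4; r++)
                memcpy(&screen[wins[i].y + r][wins[i].x], wins[i].pixels[r], 10);

        for (int r = 0; r < H; r++)
            printf("%.*s\n", W, screen[r]);
    }

    int main(void)
    {
        struct window wins[2];

        wins[0].x = 1;  wins[0].y = 1;   /* window A, further back */
        wins[1].x = 6;  wins[1].y = 3;   /* window B, in front     */
        memset(wins[0].pixels, 'A', sizeof wins[0].pixels); /* A "draws" only into its buffer */
        memset(wins[1].pixels, 'B', sizeof wins[1].pixels); /* B "draws" only into its buffer */

        composite(wins, 2);              /* only the compositor touches the screen */
        return 0;
    }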

    The examples in this article are taken from the Windows operating system, but in fact this idea appears on every kind of operating system. For example, on macOS, this is done by the "Quartz Compositor".

    On Linux, people used to use X11 / Xorg and nowadays increasingly use Wayland.

    On Android, people use WindowManager, which is a component built into the Android framework.

    This article has covered quite a bit of ground, so I'll pause here. In the next article, we'll explore how a program "draws" a line of text, a button, and so on, as well as the differences between native UI, web-based UI, React Native, Flutter, and more.

    Bonus

    So, finally: why do windows leave those long trails when Windows XP lags?

    The reason is that in Windows XP and earlier, a program only had a limited time to "draw" its content on the screen. If that time ran out and the window still hadn't finished drawing, Windows XP would simply reuse the previous image that was already there.

    Because of this "reuse", each time a lagging window moves, it "draws" a new image of itself on top of the image that is already on the screen, and nothing ever erases the old copies:
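
    As a rough sketch of the effect, with the same kind of made-up character screen as before: the window is repainted at every new position, but nothing ever erases the old positions, so every move leaves another copy behind (each position is labelled with a digit so the trail is visible):

    /* The "trail" effect: repaint a moving box without ever erasing it. */
    #include <stdio.h>
    #include <string.h>

    #define W 30
    #define H 6

    static char screen[H][W];

    /* Paint the moving window at horizontal position x, labelled with ch. */
    static void draw_box(int x, char ch)
    {
        for (int r = 1; r < 4; r++)
            memset(&screen[r][x], ch, 6);
    }

    int main(void)
    {
        memset(screen, '.', sizeof screen);

        /* The window is redrawn at each new position, but the hung window
         * "underneath" never repaints the uncovered area, so the old
         * copies stay on screen. */
        for (int x = 0; x <= 20; x += 5)
            draw_box(x, '0' + x / 5);

        for (int r = 0; r < H; r++)
            printf("%.*s\n", W, screen[r]);
        return 0;
    }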

    Sources

    * This article was compiled and written by me, with no parts produced by Google Translate. Images are either from the internet or Photoshopped by me; no images were created by AI.
