Scholar, Writer, Computer Geek

My website and social media bios identify me as “scholar, writer, computer geek.” Now that I’m slowly becoming more active on Bluesky (https://bsky.app/profile/davidalexlamb.bsky.social) and Mastodon (@davidalexlamb@universeodon.com), I figured I should say a bit more about what all of that means, so people have enough information to decide whether they want to follow me.

Computer geek is pretty straightforward. I’ve been programming computers (both professionally and for fun) since the fall of 1970 and have advanced degrees in Computer Science, including the PhD that got me a job as a professor. I play video games as one of my major recreational activities.

Scholar used to be straightforward: it goes with the “publish or perish” side of being an academic. If you look for things I’ve written, you’ll find professional publications from my time as a professor: a textbook and relatively few journal papers (hence my retirement at the end of 2023 at a lower rank than Full Professor). I doubt I’ll ever publish in a journal again. Am I still a scholar?

Writer requires more thought. I’ve definitely written things, but you won’t find much recently except blog posts. I’ve been writing fiction since 2006, when I wanted something to exercise my brain while on disability leave for chronic depression. It’s all in the form of partly-completed NaNoWriMo projects, plus a couple of rejected short stories. So you won’t find any of my fiction. Am I still a writer?

Mary Robinette Kowal says “you’re a writer if you write” even if you aren’t published. She discourages her students from calling themselves “beginning writers” or even “unpublished writers.” J. Michael Straczynski, on the other hand, wouldn’t have considered me one, even at the height of my professional career; he apparently has said “You’re not a writer if you write; you’re a writer if it’s the only thing you can do.” So, deciding whether I am or have ever been a writer depends on where in that spectrum you sit.

The reason I list both scholar and writer in my bio is that I am still exercising my scholarly skills, and applying them to writing. Over the years I’ve written several blog posts that amount to short scholarly essays. When I learn enough about something, I

  • find a lot of sources on the subject and read them;
  • make notes on what I have read;
  • organize them into a survey on the topic;
  • include links to all my sources so you can read them yourself and decide if my summary is reasonable; and
  • try to provide some original insight to the topic (which is what makes it “original research” rather than “a list of stuff”).

These are pretty much the instructions I always gave my students when assigning a term paper. So I’m still a scholar (aside from not having to go through peer review, which is a whole complicated Thing to evaluate).

You can find most of these on my website, such as (in reverse chronological order):

There may be others, and they’ll be easier to find when I get tag search implemented so you can look for “survey.” There are other posts about writing, but I don’t count them as scholarly unless they summarize and link to sources; opinion pieces don’t count.

Scholar, writer, computer geek: that’s me. I hope you find my blogs and social media posts worth reading.

Highway of Heroes

This past Monday, November 11, was Remembrance Day in Canada, when we remember the service of our military and especially those who died during that service. Many cities have cenotaphs or war memorials where the local citizens gather for special ceremonies, and some people will put aside the thoughts that arise on such a day until the following year.

In Ontario, though, there is a constant reminder that you can spot on any day you travel along Canada’s superhighway, the 401 (“Macdonald-Cartier Freeway,” after two of our country’s founders), from Exit 362, Keele Street in Toronto, to Exit 526, Glenn Miller Road at Trenton. Every so often you might catch a glimpse of a blue sign bearing a poppy and the text “Highway of Heroes / Autoroute des Héros,” and you might wonder why that stretch of road, and no other, might get such a designation.

Exit 526 is the closest point on the 401 to Canadian Forces Base Trenton, which has what amounts to a military international airport. Exit 375 is where you turn off the superhighway (16 lanes at that point) to get to the Centre of Forensic Sciences in Toronto. It is the route taken by convoys conveying fallen Canadian soldiers on the last journey of their military service.

Most memorials are created by politicians, but this one (aside from its official naming) was created by the people. Ever since 2002, ordinary citizens, many with no connection to the military, would line every single overpass on that route (or its older, shorter version) to mourn and honour the dead.

Canada has a small military, but ever since the Battle of Vimy Ridge in 1917, it has “punched above its weight” in both armed conflicts and peacekeeping missions. When I was a teenager, it had provided 10% of all U.N. peacekeeping forces to that point, in part because former prime minister Lester Pearson had been instrumental in creating peacekeeping in the first place. We might be reluctant war-fighters, but our troops do it bravely and well.

Canada’s role in Afghanistan started in October 2001, when, secretly, our elite special ops team, Joint Task Force 2, sent its snipers to aid our American allies. Regular troops joined them in January 2002, took on a larger and more dangerous role in Kandahar in 2006, and finally withdrew in 2014.

During its time in Afghanistan, Canada had the highest per-capita rate of casualties of all coalition members, and the third-highest absolute number of deaths. It wasn’t our war, and after the American withdrawal in 2021, when the Taliban took over again within about a week, the whole thing had proved pretty much pointless. But no matter how futile the war, and how intense our feelings about conflict and the people who start it, it is fitting to remember those who died serving their country with the ultimate sacrifice.

The Highway of Heroes is a quietly Canadian way to do it.

True patriot love, There was never more. — The Trews, Highway of Heroes

Phone Games from a Systems Perspective

Once upon a time I used to teach software development (at the 3rd year through grad levels), and one of the subjects I taught was “system analysis.” Towards the end of September 2023 I started playing phone games for the first time, and it occurred to me that one subset I liked a lot would serve as a good example – not in enough depth to count as a “case study” in the pedagogical sense, but complex enough to expand students’ thinking about systems. I think it’s also an interesting example even for people who never expect to analyze systems professionally.

The definition of “system” I used when I was a Computing professor was “a collection of parts, with relationships among them, interacting with an environment across an interface.” The parts and relationships can be anything, not necessarily related to software or computers. The solar system is (duh) a system – the parts are celestial objects, and primary relationships include orbital paths and gravitational effects. The US government is a system, usually described as consisting of three parts: judicial, legislative, and executive (though, as you can imagine, there are lots more parts and relationships when you delve into more detail).
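That definition is concrete enough to sketch in code. The following is a minimal illustration, not any standard modelling library: the class name, fields, and the solar-system relations are all invented here just to make the vocabulary of parts, relationships, and environment tangible.

```python
from dataclasses import dataclass, field

# A minimal sketch of the "system" definition above: parts, relationships
# among the parts, and an environment outside the boundary. All names here
# are hypothetical, chosen only to illustrate the vocabulary.

@dataclass
class System:
    parts: set = field(default_factory=set)
    relationships: set = field(default_factory=set)  # (part, relation, part) triples
    environment: set = field(default_factory=set)    # things outside the boundary

    def add_relation(self, a: str, relation: str, b: str) -> None:
        # Adding a relationship implicitly adds its endpoints as parts.
        self.parts.update({a, b})
        self.relationships.add((a, relation, b))

# The solar-system example from the text:
solar = System(environment={"interstellar medium"})
solar.add_relation("Earth", "orbits", "Sun")
solar.add_relation("Moon", "orbits", "Earth")
print(sorted(solar.parts))  # prints ['Earth', 'Moon', 'Sun']
```

Notice that even this toy model forces the boundary question: the interstellar medium is placed in the environment, not in the parts, which is exactly the kind of choice a systems analyst has to make.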

Software (Mostly) Components

So, when you start delving into phone games on an Android device, a few parts are fairly obvious. As for purely software parts, you use Google Play to download your game, and the game itself to play. Most games I looked at involve multiplayer aspects, and that requires “servers” – collections of computers running software that manages whatever interactions you have with other people. “Client/server architecture” is a big part of how the Internet works.

Those servers are a critical part of keeping a multiplayer game working, and the company that runs them needs money to keep them going; in all likelihood they see the aspects of the game that require servers as a way to generate income. A lot of phone games are free-to-play (sort of; some seem to be “pay to win”), so the money can’t come from paying a one-time fee as happens with desktop and console games (well, one time per update or downloadable content (DLC)). This introduces more parts, all related to money, in the form of two “subsystems” – other systems that are parts of the overall “main” system. The two main money-acquisition mechanisms are ads and in-game purchases.

The “ad server” subsystem selects videos to play. At least, that’s what it looks like from the gamer’s perspective. Behind the scenes, there is infrastructure taking care of several other aspects of getting money to those who run the servers:

  • Google Play, which gets used if you click “install” on the ad.
  • In many cases, you get in-game rewards when you play the full ad (often 30 seconds’ worth), so the ad server has to feed information back into the game. This means communication isn’t a one-way street; it requires some form of “communication protocol” – a set of rules about what information gets passed back and forth.
  • A payment infrastructure, that gets money from the gamer to those who run the servers. This requires a mechanism to deal with money transfer, so there are relationships (and more communication protocols) with banks or other online payment systems.
  • Some means of picking which ads to serve, which may involve some means of associating with each game some information about what kinds of ads make sense, and yet another protocol between games and the ad server.
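The two-way reward protocol in the second bullet can be sketched as a pair of message handlers. Everything below (message fields, the 30-second threshold as a completion rule, the five-gem reward) is invented for illustration; real ad SDKs define their own callbacks and server-side verification.

```python
import json

# Hypothetical sketch of the game <-> ad-server reward protocol described
# above: the ad server reports whether the full ad was watched, and the
# game grants a reward only on a completed view. All names and values
# here are invented for illustration.

def ad_server_response(ad_id: str, watched_seconds: int) -> str:
    """Ad-server side: report whether the full ad (30 s here) was watched."""
    completed = watched_seconds >= 30
    return json.dumps({"ad_id": ad_id, "completed": completed})

def game_handle_reward(message: str, balances: dict) -> dict:
    """Game side: grant the in-game reward only for a completed view."""
    msg = json.loads(message)
    if msg["completed"]:
        balances["gems"] = balances.get("gems", 0) + 5  # hypothetical reward
    return balances

balances = game_handle_reward(ad_server_response("ad-123", 30), {})
print(balances)  # prints {'gems': 5}
```

The point of the sketch is the “communication protocol” idea: both sides have to agree on the message format and on what counts as a completed view, or rewards get granted (or withheld) incorrectly.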

In-game purchases are simpler, and involve their own protocols with some of the same elements as ad servers. The primary protocol is a means of communicating with the payment server (often Google Pay, for Android games). That server is kept completely separate from the game (so you don’t have to give out your financial information to a lot of companies you don’t really know you can trust), and it has its own protocols for letting you confirm that you approve the purchase and for letting the game know the purchase is complete.
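That separation can be sketched as a tiny state machine. The states and function names below are invented for illustration; the essential property they model is that approval happens entirely inside the payment service, and the game only hears the outcome.

```python
from enum import Enum, auto

# Hypothetical sketch of the in-game purchase protocol described above.
# States and names are invented; the point is the separation of concerns.

class Purchase(Enum):
    REQUESTED = auto()  # game asks the payment service to start a purchase
    APPROVED = auto()   # user confirms inside the payment service's own UI
    COMPLETE = auto()   # payment service tells the game to grant the item

def payment_service(state: Purchase, user_approves: bool) -> Purchase:
    # Approval happens entirely inside the payment service, so the game
    # never sees card numbers or bank credentials.
    if state == Purchase.REQUESTED and user_approves:
        return Purchase.APPROVED
    return state

def game_finalize(state: Purchase) -> Purchase:
    # The game grants the item only when told the purchase went through.
    return Purchase.COMPLETE if state == Purchase.APPROVED else state

result = game_finalize(payment_service(Purchase.REQUESTED, user_approves=True))
print(result)  # prints Purchase.COMPLETE
```

The design choice being illustrated is trust minimization: the game’s half of the protocol needs no financial data at all, only the final state.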

Non-Software Components

You could call everything above “the system” and be done with your analysis – and a lot of software professionals might do that. But there is always an issue of where to draw the boundary between the system and the environment. For example, where does the solar system “end”? The Oort cloud? The heliopause? The systems analyst has to make a choice; defining the boundary depends on what purpose you intend for your model of what is going on.

The word “model” is critical. There is a “fundamental law of data modeling”: the only completely accurate model of the real world is the world itself. Because of limitations of the human brain, a model always leaves some things out.

For the gamer, there is at least one more component: the financial institutions in which you store the money you use to pay for in-game purchases. Those are not just money-transfer subsystems. The way in-game purchases work, you’re likely to make a large number of small purchases in a fairly short amount of time, and the financial institutions’ fraud detection systems may flag your account as suspicious and suspend payments (guess how I know). This requires you to interact with the financial institution to (a) unlock your account and (b) prevent it from being locked again in the future. At this point in the development of artificial intelligence, that’s likely to involve talking to human beings, who are thus also part of “the system.”
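The behaviour that trips fraud detection – many small purchases in a short window – is easy to sketch as a simple “velocity” rule. The threshold and window sizes below are invented numbers; real financial institutions use far more elaborate models, but the basic sliding-window idea is the same.

```python
# Hypothetical sketch of the fraud-detection behaviour described above:
# many small purchases in a short time window trip a "velocity" rule and
# freeze the account. Threshold and window sizes are invented numbers.

def flags_account(purchase_times: list, window: float = 600.0,
                  max_in_window: int = 5) -> bool:
    """True if more than max_in_window purchases fall inside any window (seconds)."""
    times = sorted(purchase_times)
    start = 0
    for end in range(len(times)):
        # Slide the window's left edge forward until it spans <= `window` seconds.
        while times[end] - times[start] > window:
            start += 1
        if end - start + 1 > max_in_window:
            return True
    return False

# Six small purchases within five minutes trip the (hypothetical) rule:
print(flags_account([0, 60, 120, 180, 240, 300]))  # prints True
```

From the gamer’s point of view, this rule is invisible until it fires – which is exactly why the humans who unlock the account end up being part of the system too.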

Finally (at least, for the purposes of this particular essay), there is the very complex subsystem that works to encourage gamers to pay for in-game purchases: the collection of marketing and psychology experts, and their huge body of research and practice, about what motivates people to spend money, and what that “subsystem” tells the game developers (those who design and construct the software) to include in gameplay. A “good” free-to-play game gives you an enjoyable experience, but offers enticements that, for example, speed up slow activities, such as building new facilities in a game with city-building aspects, or which appeal to aesthetics, such as cosmetic upgrades to your characters or buildings. The temptations start small, with microtransactions such as a dollar or two for a minor benefit, and work up to more and more expensive bonuses. When someone makes lots of small purchases, especially while engrossed in an activity that captures their attention, they may not notice how the small purchases are adding up.

In some cases, there are things to buy that are essential to completing the game, or at least to achieve some of the in-game goals that other players are completing. People refer to this as “pay to win,” and all the gamers I’ve talked with about this hate it.

Conclusion

For someone like me, who has done system analysis for decades, picking apart how a system works is fun in and of itself. For my former students, and perhaps for future students of my colleagues, seeing how a system like this works is educational, and helps prepare them for jobs where they will help develop such systems, or other very different kinds of systems. I’d like to think this way of looking at some games is interesting for the average gamer, too.

NaNoWriMo Versus Trust and Safety

“Those who don’t know history are doomed to repeat it.”

Recently (November 2023) the National Novel Writing Month website had to shut down its forums, right in the middle of said month, for a very serious child endangerment issue. The Board has posted a thoughtful response on the site, so I won’t detail what the problem was, and how they’re handling it.

Except for one issue: Who are they talking to about the right way to solve the problem?

The NaNoWriMo site is essentially a social medium for writers, with a focus on encouraging sharing experiences in writing marathons at specific times of year. Social media sites go a long way back, and the people who dealt with Trust and Safety issues for early social media, such as LiveJournal (created in California in 1999), are still around and posting about it (cw: swearing). The issues are very, very tricky to handle, and you need a lot of experience with what does and doesn’t work – which, apparently, many of the recent Twitter replacements haven’t been as aware of as they should be.

I am far from expert, but after the birdsite started melting down, even I have heard of a few of the issues.

  • Compliance with laws in the legal jurisdiction where your servers live, including how you will deal with court orders such as police demands for the IP addresses of users.
  • Extraterritoriality processes of other jurisdictions (e.g. suing Board members who happen to live in, or are citizens of, or have assets in, that jurisdiction).
  • The complex process for responding to a U.S. Digital Millennium Copyright Act (DMCA) takedown (hint: you can’t just delete the offending material).
  • How your Terms of Service have to be worded, and how you will deal with violations of them.
  • Training for Trust and Safety staff – plus volunteer content moderators, if you dive into the morass that involves, instead of relying on reporting to staff.

I included the quote, and the link to many of its variations, because ignoring the past is widespread, and occasionally deliberate (not that I’m accusing NaNo of it – it’s a comment about my own field). In my research area (Computing), one of my colleagues told me that the editor of a journal to which they were submitting a paper insisted they remove all citations to sources more than 20 years old. Sometimes it’s simply not thinking to go looking: Another colleague reported that in their field they were seeing articles solving problems already solved, years ago, albeit in new contexts. I suppose it could be deliberate in some cases, since there’s a definite “selection pressure” (a metaphor from evolutionary biology that fits with “publish or perish” in Academia): It is more impressive to write a long paper about solving a “new” problem than a short letter about how you applied an old solution to a new area (which, in Computing, is harder to get published).

I wish NaNoWriMo well. I’ve been participating since 2006, and get a lot of value out of it. I won’t be suspending my annual donations (well, maybe if the Board, given time, still hasn’t addressed the primary issues in a responsible way). I hope y’all keep supporting them, too.

Last Lecture

On Thursday, November 9, 2023 CE, at 10:16 EST (by whatever network server my phone syncs with), I finished the last lecture of my university career. It wasn’t actually “my” lecture, with respect to content: my eldest offspring, Carolyn, was at a convention in Barcelona, and needed me to cover two of their lectures this week. But it was “my” lecture in terms of delivery.

Drawing Connections

You see, it’s never a good idea to just read the slides all the time. You do need to make sure you cover the material on the slide; students can benefit from the combination of the visual (the information on the slide) and the verbal (the words someone speaks about the slides). So simply reading the slides still has some benefit, but so does rephrasing the material; some ways of explaining things might click with students, or some students, better than other ways.

But what made it “my” lecture was what wasn’t on the slide, or even a paraphrase of it. It was things I could say, from my own experience and knowledge, that were related (at least tangentially), and which might enrich the learning experience of some subset of the students.

I am passionate about leading students to go beyond what they are focusing on in the moment, and open themselves up to a broader set of associations with other material they might already know, or what they might want to learn about once they realize there is a connection. One of my favourite technical examples is that when I teach about the Linux make program (which I have done in several software development courses), I point out the connection with what students learned in their discrete structures class: make performs a topological sort of a bipartite dependency graph and carries out instructions that annotate one of the two distinct kinds of nodes in the graph.
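The make analogy can be made concrete with Python’s standard-library graphlib. The file names below are hypothetical, and for simplicity the sketch collapses the bipartite graph (file nodes and command nodes) down to just the file nodes, but the ordering behaviour is the one make relies on.

```python
from graphlib import TopologicalSorter

# A sketch of the make analogy above. Each target maps to the things it
# depends on, forming a dependency graph; a topological sort yields an
# order in which every prerequisite is built before its dependents.

deps = {
    "app":    ["main.o", "util.o"],  # link step depends on the object files
    "main.o": ["main.c"],            # each object file depends on its source
    "util.o": ["util.c"],
}

# static_order() lists nodes with every prerequisite before its dependents,
# which is the order make must run the build commands in.
order = list(TopologicalSorter(deps).static_order())
print(order.index("main.c") < order.index("main.o") < order.index("app"))  # prints True
```

In the full bipartite picture, the command nodes (compile, link) annotate the edges into each target; make runs each command only after all the file nodes it depends on are up to date.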

The subject of the lecture in COGS 100 (Introduction to Cognitive Science) was social cognition: how connections with other people affect how we think. This is something I’ve learned tiny bits about by “osmosis” – things I’ve read over the years but never studied in detail. I read the lectures in advance and asked Carolyn for clarification of a few points; I made notes on my phone with the clarifications, and with other things it occurred to me to mention. I told the students I could answer simple questions (though they never asked any, other than whether the slides would be on the learning management system eventually) and that they would have to wait until Carolyn was back for more complex questions. So, I was about as well prepared as a non-expert could be – which to me is the minimal level of competence anyone should expect of a substitute lecturer.

But what I really enjoyed was drawing connections. I don’t recall all of them, since several were things that occurred to me in the moment, but several came from my long experience as an academic. Here are a few of them; afterwards I’ll talk about how I feel about the lecturing experience.

Being a Scholar

In the Tuesday lecture I had alluded to an old article, “Artificial Intelligence Meets Natural Stupidity,” by Drew McDermott, which among other things talked about how, at the time of writing, it was common for researchers to name fairly ordinary data structures after complex pre-existing concepts that the data structure was trying to represent. But there’s a “fundamental principle of data modelling”: the only fully accurate model of the real world is the real world itself. Representing things in any model requires omitting details the creator of the model deems not relevant enough to the purpose of the model. So I cautioned the students to think carefully about how well computing terminology (not just in Cognitive Science) actually matches the pre-existing concepts with similar names. I told them how to find the article with Google Scholar.

In the lecture, I asked how many had looked it up. I didn’t want an answer and told them so, but I led into discussing life-long learning: how, when they left university, they wouldn’t have textbooks and instructors to guide them, and needed to develop the intellectual skills that would help them learn on their own. In particular, they needed to learn to go beyond the content of Wikipedia articles, to the original material on which those articles are based.

Personalizing the Topic

One set of material wasn’t an aside from me directly: Carolyn had, at my request, included a couple of slides about how most research on cognition was for neurotypicals, and autistics like me are considered abnormal. For example, supposedly we don’t have empathy – but more careful research has shown that autistics understand other autistics better than neurotypicals do, and neurotypicals understand other neurotypicals better than autistics do. Well, duh: it makes sense (and should have occurred to the researchers, if not for their bias against autistics) that people of any sort understand people similar to themselves better than people who are dissimilar.

I have become quite willing to tell people I’m autistic (which is not always a good idea), and was very happy to be able to relate something personal to the material of the lecture.

Right And Wrong Answers

There were a couple of places where technology could have made the lecture more interactive. There are systems (a mix of hardware and software) that can present a list of choices, gather answers from the audience, and show how many people picked each answer. Then the instructor can ask people to discuss with their neighbours, see whether that discussion convinces anyone to change their answer, and take a second poll. I mentioned the technology and said that it is useful to take the position that there are no wrong answers: a “wrong” answer is often a sign of a misconception that can be discovered, leading to a change of learning materials to guide people away from the misconception. And sometimes it isn’t even a misconception: when I teach the part of the Unified Modeling Language (UML) about relationships between classes, then ask people to group together and create a poster for a model of a specific problem, some (perhaps most) groups add attributes to classes that actually represent one choice of representation for a relationship. The real issue is that learning UML involves taking programmers at a particular level of learning to a new level requiring more abstract thinking. I needed to change my teaching approach to point that out, show an example of the “error,” and show them how the community of people using UML expects to represent that situation.

Feelings

I have alexithymia, common among autistics: difficulty recognizing emotions, both our own and those of others. So introspection about how I feel is difficult, and has a strong element of trying to interpret physical feelings and glimmers of mental ones, substituting analysis for intuition. But I’m fairly sure about a few of my reactions during and after the lecture.

I felt fairly comfortable and confident in the second lecture; in the first I had been a bit rusty. My lecturing skills have never been great but rarely bad. For most of my career I expect my lectures would have been described as fairly easy to understand but a little boring. There may have been a few flashes of brilliance – a few former students have told me of some – but for the most part I wasn’t as inspiring a lecturer as some of my colleagues (though perhaps a little better than a very few of them). It felt good that I hadn’t declined much in skill over the COVID gap.

I felt a little sad about leaving lecturing, but not a lot. I’ve always enjoyed finding good ways of explaining things, and I can keep doing that, in textual form. I can write short blog entries, and may be able to write long books now that I’m freer to do so.

I’m glad Carolyn gave me this one last opportunity.