Reading10: Right Place, Right Time

Linus Torvalds didn’t exactly expect Linux to do what it did. Moreover, he didn’t exactly want to deal with all of the business issues that came about because of its success. He mentions in his book that “I had no interest getting involved in this.” At least at face value, Linus didn’t get into Linux to be a business mogul, and he loathed much of what came along with its success. That’s what makes his success so confusing, and perhaps more respectable as a result. Maybe Linus has been putting on an “unassuming business-naïve nerdboy” façade for all these years, hiding a capitalist mindset while conspiring to become a millionaire just below the surface, but I think that’s doubtful. More likely, he is just like that.

I have to think that Linux’s success is a kind of one-in-a-million fluke that is unlikely to happen again. The obstacles to repeating it are twofold. First, with all the money in the technology industry these days, it seems much more likely that the next industry disruptor will be funneled through moneyed channels than emerge from open source. Second, I don’t see many holes in today’s technology world like the one that existed back then: the lack of an open-source kernel. Operating systems are the basis of most user-friendly technology, so they’re remarkably important. What else could be made that carries such necessity in the lives of tech users?

I think there will always be successes in the open-source field, because there are countless places where the paid software market comes up short. There are small new things being worked on daily, like the ones we saw presented for Project03. None of these projects are remarkably important in the lives of many (save Firefox, perhaps, but even then, most people use Chrome). However, mentioning Chrome brings up an interesting point, since Chrome was built on WebKit, which is open-source.

With this said, there seem to be two possible ways in which open-source applications will be successful in the future. The first is that open-source software will live on as the backbone of much more successful proprietary software, like WebKit for Chrome. This seems unsustainable for obvious reasons: the open-source community does the foundational work while the proprietary layer captures most of the value.

The second possible route is what I mentioned above: open-source software filling niche requirements for a more limited, but more committed, user base. This is what things like MuseScore and Blender and i3-gaps work towards. The average laptop user will never interact with any of these projects. However, even if you’re a user of technology who isn’t a computer science major, there’s still a decent chance you’d use one of them; a musician who doesn’t want to pay for proprietary music software, for example, might turn to MuseScore. As the world of technology becomes more disparate, fragmented, and niche (in some ways, not all), the opportunity for open-source software to fill in those gaps becomes more realistic.

So, no, there will likely not be a success like Linux in the future. But that’s okay, because the technology world is larger than it ever was, and a disruptor would have to do much more than Linux already did.

Reading09: Against Weird Tech Luminaries

After reading about Linus Torvalds’ upbringing and general demeanor in childhood, his ending up creating Linux makes a lot more sense. In general, most of these tech luminaries are introverted people with some serious social aversions. That’s not meant offensively, but it does seem that, more often than not, these people develop their technical prowess in the free time afforded by not being with friends. He says in the book, “Helsinki kids are playing hockey and skiing with their parents in the woods. You’re learning how a computer actually works.” This is a very clear sign that Linus’ priorities were in a very different place from those of the average person his age.

I just don’t really understand why this stereotype is so rampant in the tech world. Why is it that you have to be an introverted, quiet, awkward-as-ever person to make something important in the tech world? Linus fits literally every stereotype of the ‘hacker.’ I don’t know why this upsets me so much, but it really does. Linus’ upbringing is so interesting to me because it confirms the idea in my mind that you have to be bad at social interactions to make something meaningful in technology. It seems to be the rule rather than the exception.

It would seem that Linus’ demeanor growing up was remarkably similar to Bill Gates’s, because he was definitely a weird dude too. I’d be remiss if I didn’t link this wonderful video of Bill Gates being the most awkward human being on the planet:

Like, what’s the deal? Why are all of these people like this? Why is it that computers make you unaware of how the rest of the world works? I’m being so, so judgmental right now, and I’m also clearly ignoring my own inability to have a successful social life, but that’s slightly beside the point.

Personally, I wouldn’t like my story to be defined by my contributions to something bigger in the tech world, like it is for Linus or Bill Gates. I don’t think I fit in very well in the “hacker” world, and doing something the way they did would be miserable to me. It’s not my way to stare at code for days on end to discover a cool solution to a problem. It’s my way to do what’s asked of me, at least where computer science tasks are concerned.

I want my story to be that of someone who doesn’t let their job define them as a person. These people are nothing without their technological feats and accomplishments, and that’s fine for them. There are just other things I care about more than whatever cool contribution I make to the computing world. I know this sounds antithetical to what I probably should believe, because I’m going to be spending a lot of my life doing computer science stuff. But I don’t see it as a problem. Jobs are jobs, and to me, they aren’t anything more. To sell my soul to a capitalist industry like these people have would be a betrayal of myself.

Reading08

As we talked about in class, and as ESR describes in “The Magic Cauldron,” most software is used as an intermediate good, that is, as a means of doing something else. This is one of the two uses of software, the other being as the object sold. He says that “In other words, software is largely a service industry operating under the persistent but unfounded delusion that it is a manufacturing industry.” This was likely true when ESR wrote the essay, and it is most definitely true today. Pretty much all of my friends who are graduating this year are going into jobs that create software for the purposes of a different product. One is working for a health insurance company, developing the software that manages the insurance. Another is working for a financial institution, which was ESR’s first example of “in-house” software. Unless you’re developing a niche user application, it’s very likely that you’re just going to be developing an intermediate good.

I believe that the key word there is niche, and since this is the case, it’s tough to see a world in which the Open Source model makes sense in business terms. In the modern world, the most valuable software products (Alphabet’s search engine and analytics tools, as an example) are heavily guarded, and it is impossible to really glean how they do it so well. Web Science experts (Tim) have a pretty good idea, but I’m fairly certain there’s a lot Alphabet is doing that we will never find out about. I might be misinterpreting ESR’s point to a certain extent, but it still holds that for the largest products, staying proprietary is one of the most important aspects of their profitability.

ESR does go through some examples of where Open Source is an effective business model, but these are for smaller products that probably don’t have much value as things to be sold. One example he gives is Cisco’s print-spooling software, which allows disparate buildings to send and receive print jobs. Cisco never intended to sell this software, or even package it with other services, so it made sense to make it open-source.

It may seem after this that I’m arguing with myself, but I just thought of a way in which open-source software might make sense for businesses. In the modern economy, companies like Cisco aren’t selling software so much as they’re selling services, like management and installation. By making their software open-source, they can effectively cut development costs while getting the same amount of money from the management they provide to other companies. As long as the open-source model keeps their software up-to-date, it’s not the worst idea to manage software like this. Again, though, for a lot of companies, the software is the crown jewel, and any compromise to that would be harmful to profitability. I’m not sure where the percentages lie, but if software companies are moving towards a service model, then open-source might well be the answer. I’m sorry I was terribly indecisive on this.

Reading06: Cathedrals - A Product of Their Time

In his essay “The Cathedral and the Bazaar,” ESR paints a dichotomy between two possible routes of software development: that of cathedrals, “carefully crafted by individual wizards or small bands of mages working in splendid isolation,” or that of bazaars, “of differing agendas and approaches […] out of which a coherent and stable system could seemingly emerge only by a succession of miracles.”

ESR believes that the bazaar-style development process is effective, since it follows principles that are already used to build great projects, such as code reuse and retooling. He uses the example of Linus Torvalds, who based Linux on already-existing Minix code, and notes how he himself based his POP utility on a utility that already existed, just without the features he wanted. This is the collaborative energy he describes, and it would not have been possible without being able to see and interact with the project in its rawest form, before or concurrent with release.

I believe that the cathedral system of project-building in software development is dated, and comes from a time when knowledge of computers, programming, and technology was scarce. When cathedrals were being built hundreds of years ago, the average serf had no knowledge of materials science or physics or interior design or the religious rules of cathedral building. As such, they wouldn’t have been able to contribute very much to the construction of the grand cathedrals. I’d say it is about the same today: if an architect asked me for input on a cathedral, I’d have quite little to say in the grand scheme of things.

However, things have become different for technology. Back in the 1960s, only a minuscule percentage of people really knew how coding worked, and most of them were holed up in Bell Labs or MIT or some other enclave of technological, alchemy-like wisdom. In that case, it made perfect sense for all technology to come out of these spaces, since the average citizen wouldn’t even know where to start in relation to, let’s say, a new operating system or mail service.

As time went on, however, more and more people were introduced to this technology and took an interest in it, even as a hobby. This meant that more and more people were able to code at the level that would be expected and required to contribute to a project like Linux. Thus, when Linus opened up Linux for community contribution, it just made sense. He knew that there were hundreds or thousands of people at the same level of knowledge as him (even though he may not admit it), and that, by crowd-sourcing bug fixes and feature additions, he would be able to create a product comparable to or better than what some cathedral builders might make.

Of course, there are dangerous pitfalls involved in making a product open-source. There needs to be an effective review process in place to make sure additions to a project are effective and free of bugs. There also needs to be a way to send additions easily and effectively, and that’s where git comes in (I think).

Reading04

For my first years of college, when I would come up against a problem in any of my programming assignments, I would tend to consider and try to solve the problem using C++-style syntax in my pseudocode. This was because C++ was the first language I learned, back when I was a freshman in high school and my dad was convinced that I needed to know how to use it. I took everything from a perspective of, “how would you implement this in C++?” However, as I continued on into classes like Systems Programming, where Python would appear more readily, I figured out that thinking at a level above C++ is insanely easier. No longer did I have to worry about memory when constructing functions and for loops and whatever. Sure, there are benefits to having that fine-grained control over the system, but whatever it is, I haven’t needed it.
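To illustrate the kind of bookkeeping I mean, here is a minimal C++ sketch (the function names are my own, purely for illustration). The first version manages heap memory by hand; the second leans on the standard library, which frees its own memory, which is roughly the convenience Python gives you everywhere:

```cpp
#include <iostream>
#include <vector>

// Manual style: you own the allocation and must remember to release it.
int sum_manual(int n) {
    int* values = new int[n];   // explicit heap allocation
    for (int i = 0; i < n; ++i)
        values[i] = i;
    int total = 0;
    for (int i = 0; i < n; ++i)
        total += values[i];
    delete[] values;            // forget this line and you leak memory
    return total;
}

// Higher-level style: std::vector cleans up after itself when it
// goes out of scope, so there is nothing to forget.
int sum_vector(int n) {
    std::vector<int> values;
    for (int i = 0; i < n; ++i)
        values.push_back(i);
    int total = 0;
    for (int v : values)
        total += v;
    return total;
}

int main() {
    std::cout << sum_manual(10) << " " << sum_vector(10) << std::endl;  // prints "45 45"
    return 0;
}
```

Both functions compute the same thing; the difference is how much the programmer has to think about the machine while writing them.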

Nonetheless, I still have such a soft spot for C++ and will never fail to defend it when, inevitably, someone says how miserable it is to debug its arcane compilation errors or, heaven forbid, a segfault. All this is to say that my preference for C++ has nothing to do with its purported advantages over other languages. I’m of the opinion that choice of programming language does not matter except in specific cases, such as when your device only supports a Java Virtual Machine or something like that. In the end, what really matters is how easy you want it to be to program your thing, or how fast you want it to run. Ninety percent of the time, I’m gonna choose Python, since it’s by leaps and bounds easier to write than something like C or C++.

Paul Graham argues in “Beating the Averages” that his use of Lisp gave him a competitive advantage over his competitors. According to Graham, “If other companies didn’t want to use Lisp, so much the better. It might give us a technological edge, and we needed all the help we could get.” In the essay, he says that using Lisp was an “experiment” to see if a language could give you an edge against competitors. Graham says it did, since his site was, according to him, “always far ahead of them in features.” This argument seems kind of bogus to me. Of course he thinks his product was better; it was his, and it was the one that made millions.

However, this heavily discounts the effect that luck has in the business world, and anecdotal evidence that a programming language gave Graham the advantage is not enough to convince me that it matters in the long run. As pointed out in class, most successful tech entrepreneurs attribute their success, in large part, to luck. There are hundreds of people trying to make it big in the industry, and it’s almost impossible to say that the successful ones just work a bit harder than the unsuccessful ones. That’s totally unfair to the people who break their backs for years, only to see no return in the end. Graham wants to make it seem like it’s his “cunning” or “intellect” or “foresight,” but really, it’s just luck, and no choice of programming language is going to make that big of a difference.

Reading03

It would seem to me that Paul Graham’s vision of a hacker, at least as expressed in his essay “Hackers and Painters,” is compatible with Steven Levy’s view of a hacker. Paul Graham views hackers as people who create things, activating the creative parts of their minds to do something new and interesting.

Graham says in the essay that when Yahoo bought his company Viaweb, they asked him what he wanted to do. He writes, “I had never liked the business side very much, and said that I just wanted to hack.” The early hackers of Steven Levy’s book were of the same mindset. They were hacking for the enjoyment of it, without any worry about business and profits. They just wanted to create and break things and then put those things back together in a more elegant way.

Well, while on paper what Graham says is compatible with Steven Levy’s hacker, I think looking more deeply into it shows a few tensions between his words and the results of history. Paul Graham became super rich from the sale of Viaweb. At some point in his life, he had to have thought about money as a goal, rather than just the enjoyment of hacking. Sure, he probably has tried to perfect his style of hacking just like a painter perfects their style, but it’s hard to believe that he did it all without the goal of profit in his mind.

I think Paul Graham accepts this difference between a person like him and the hackers of the 60s. This is because of his view that a hacker is like a painter. Steven Levy’s hackers were tinkerers. Just like they messed with the model railroads, they messed with the computer. In their minds, there was never any conception of a profit to come from what they were doing. But professional artists like painters can anticipate a source of income from their art if they continue to work at it for years and years. Paul Graham tries to portray the hacker in the same way.

Paul Graham says that hackers, just like other artists, adopt “day jobs” to give them income while they work on their passions during non-work hours. He says that since hackers are “makers,” hacking must be an art for them rather than a science. And, like artists, to be a hacker you should be making as often as you can, to perfect your art.

This “day job” mentality toward hacking is such an odd one to me. I suppose this is because I view a computer science job as an engineering job, and while, sure, I can work on a side project that interests me, I don’t view that the same way as the music I play or write. The problem, I think, is that when you work on a programming thing, you’re almost always creating a tool and not a thing to be enjoyed. We talked about this a lot in ethics, so I won’t elaborate here, but, for example, when you’re creating a video game, all the things going into it (art, music, narrative) make the art, while the programming itself is just a method to make it happen.

Reading02

There’s an odd kind of dilemma involved in the conflict between the hacker ethic and the possibility of making tons of money. The original hacker ethic espoused and practiced by the MIT engineers seemed to carry with it a kind of elitism; in practice, hackers were upper-middle-class white men who could be trusted to spend 14-hour days programming and messing around with the computers their universities offered them. In this way, it almost seems like the era of “Game Hackers” brought with it the opportunity for more people to be hackers. The era of hardware hackers had started this broadening, and the game hackers simply continued the trend.

I’m still of the opinion that bringing computers and programming to more people is not against the hacker ethic, and that the promise of monetary reward for bringing games to people is only right. Like we talked about in class, it’s a terribly elitist and privileged view to say that software should be made just for the greater good and not with the expectation of reward. Although people like Bill Gates may have taken the idea a bit too far, there needs to be some way for a programmer to make things without having to be rich in the first place. If someone has the skills, ambition, and ideas to create something, they need to be rewarded for it. Otherwise, only the already wealthy can create.

It’s also tough to say which type of programmer is preferable in our modern world. To be a “goal-oriented engineer” is to provide usable and enjoyable things to the most people, while also generating sustained profits for the corporation you work for. On the other hand, to have “the love of computing in your heart” is a way to provide yourself a fulfillment that may not be possible for the goal-oriented programmer.

In my opinion, if you’re a person who is obsessed with programming and perfectionism and learning, then more power to you. You have every right to make programming your passion. However, someone like Ken Williams may take that passion too far. Obviously, Ken and Roberta Williams did a lot for the computer games industry. It’s just that the way the book describes Ken makes him seem like a jerk. The book says, “No matter where he worked, in any number of nameless service companies in the yawning valley above Los Angeles, Ken Williams did not meet one person who deserved an iota of his respect.” This description of him fits a fair number of the passionate, self-righteous programmers I know. No level of skill or fame gives you the right to act in such a way, or to believe so highly of yourself.

So, for me, it is much better to be a professional engineer. I enjoy coding, hacking, programming, and the like, and can spend hours trying to solve a problem. But I don’t base my self-worth on my hacking skills or judge myself relative to the programming skills of others. To be a hot-shot programmer often means building your identity around programming, and I just can’t do that.

Reading01

It is clear that when the “Hacker Ethic” reached San Francisco, its ideals were appropriated and altered slightly to fit the hippie culture of sharing and communication in that time and place. The Community Memory system is a great example of this. Community Memory provided a way for people to use technology and communicate with each other, no matter who they were. This differed greatly from the “True Hackers” at MIT. There, they believed that they followed the Hacker Ethic, which included the idea that access to technology should be unlimited. However, I believe they failed in this respect. All of the “True Hackers” had some connection to MIT, whether it be attending the school, working there, or having a parent who worked there. This was a privilege that most people didn’t have. As such, even if someone wanted access to technology, there was no guarantee that they would be able to get it.

On the other hand, in San Francisco, the Community Memory terminals were accessible to all in a public space. The Community Memory system was probably most users’ first interaction with a computer. Even though the people who used it probably were not hacking it, they did get firsthand experience figuring out how the machine worked and how to use it to its fullest advantage. In providing Community Memory to the public, the Hacker Ethic’s promise of unlimited access to technology came one step closer to reality than it did during the “True Hacker” era.

Community Memory was just the first foray into bringing the Hacker Ethic and computer technology to the greater public, as opposed to keeping them in a small, isolated, and privileged group. With this move toward wider usage, other questions come up. To bring technology to the masses, to truly fulfill the Hands-On Imperative, other components of the Hacker Ethic may have to be abandoned. The “Hardware Hackers” of the 1970s quickly considered abandoning the principle that “all information should be free.” Bill Gates, who sold a BASIC interpreter in the 1970s, wrote an angry letter in the Homebrew Computer Club’s newsletter targeted at those who were spreading his software without paying him. He considered it wrong and tantamount to stealing; all the work he had completed was meaningless in his mind unless he got paid for it. This was ironic, of course, since he himself got the original interpreter code from another, already-working interpreter.

Thus comes the conflict between the original Hacker Ethic and where the “Hardware Hackers” brought it. It is true that money is necessary for a company to expand, and it was only with the massive expansion of companies like Apple and Microsoft that the personal computer revolution occurred and brought computers into the hands of many more people. It is also true that this yielding to a more business-like mindset made more things proprietary, and allowed more information to be hidden behind walls of encryption and payment. People could no longer hack their computers the way they wanted to, the very way in which these computers had been invented in the first place.

In the end, it matters that as many people as possible get access to an uninhibited form of technology, so that they may learn about it and learn how to improve it. However, there needs to be a middle ground between offering technology to everyone and putting the most important parts of technology behind DRM, encryption, and paywalls.

Reading00: The “True Hacker”

In his book Hackers: Heroes of the Computer Revolution, Steven Levy describes a “True Hacker” as someone who puts the craft of computing at the front of their priority list, no matter the situation. Like artists, they are singularly committed to this craft, and they spend countless hours perfecting their skills and creating new artifacts which push the medium, in this case a computer, to its limit. They want to discover anything and everything a computer can do, and will subvert rules and expectations to reach that goal.

Throughout the first part of his book, Levy describes the first computer hackers, a group of students at MIT in the late fifties and early sixties who discovered computing and saw it take over their lives, for better or for worse. People like Peter Samson and Alan Kotok, students at MIT who began to develop a fascination with the new computers the university was acquiring, are the archetypes for Levy’s “hacker.” For the most part, they ignored their classes and any sense of a social life in pursuit of code bumming and discovering what a computer was capable of. Levy describes the kind of awe and reverence the other “hackers” felt when a student discovered how to reduce a decimal printing program to below 50 instructions. To the average person, something of this nature would seem bizarre and meaningless, but to them it was an obstacle, and the individual who cracked the code, so to speak, earned a sort of respect.

Levy describes the countless all-nighters that these “hackers” would commit themselves to, all to finish or debug a program. He describes one scenario in which a new computer, a PDP-1, was delivered to MIT. One of the students was convinced that a new assembler would take too much time to complete and that it was better to just use the one included with the computer. Some other students, the clear “hackers” in the story, promised him that they would be able to complete a better one within the weekend. They followed through on this promise and had the assembler working by Monday.

The previous story is what, in my mind, constitutes a “true hacker.” It is someone who completes a task because it is there, like Sir Edmund Hillary ascending Mount Everest, except that this mountain is in a darkened basement and is surrounded by old take-out food containers.

However, I don’t think I wish to be a “true hacker” as Levy describes it. My passion is not computing, however much I enjoy it as a major and as a future career. I can see myself spending my 9-to-5 every day working in the field without feeling as if I have to spend the rest of my afternoons and evenings and nights and sunrises doing it as well. My passions most certainly lie elsewhere. There are many other things I would like to divide my time among, like playing and writing music, playing games with friends, exercising, and most importantly, sleeping. There’s a large divide in my mind between myself and what Levy describes as a “true hacker.” Perhaps one day in the future, when I’m coding up a storm, I will receive the sort of satisfaction and enjoyment that Levy shows these “true hackers” experiencing, and in that moment, I will understand the hacker’s high he describes. At that moment, I will commit myself to becoming a “true hacker.” Until then, I’m perfectly fine with doing computing well, not becoming a “true hacker,” and enjoying the other parts of life that are afforded to me.