Definitions of “Code” and “Programmer”: Response to “Please Don’t Learn to Code”

December 20, 2012 at 10:21 am

Audrey Watters’ excellent post on Learning to Code in 2012 pointed me to Jeff Atwood’s piece (linked at the bottom).  I want everyone to learn to code, so I am in direct contradiction to his position, “Please don’t learn to code.”  Jeff and I disagree primarily on two points, both of which are issues of definition:

  • Most people who write code are not trying to create code solutions.  Most people who write code are trying to find solutions or create non-code solutions.  By “most people,” I do mean quantitatively and I do mean all people, not just professional programmers.  We know that there are many more people who write code to accomplish some task, as compared to professional programmers.  When I visited the NASA Goddard Visualization Lab last month, I met the director, Horace Mitchell, who told me that everyone there writes code, whether they are computer scientists or not.  They write code in order to explore their data and create effects that they couldn’t achieve with existing visualization systems.  They are trying to create great visualizations, not great code.  They simply throw the code away afterward.  This is a critical difference between what Jeff is describing and what I hope to see.  We agree that the goal is a solution.  I want everyone to have the possibility of using code to create their solution, not to create code as the solution.
  • Most people who program are not and don’t want to be software developers.  Most of the people that I teach (non-CS majors, high school teachers) have zero interest in becoming programmers.  They don’t want to be “addicted to code.”  They don’t want a career that requires them to code.  They want to use coding for their own ends.  Brian Dorn’s graphic designers are a great case in point.  Over 80% of those who answered his surveys said “No, I am not a programmer,” yet everyone who answered wrote programs of 100 lines or more.  Not everyone who “programs” wants to be known as a “programmer.”

The problem is that we in computer science often have blinders on when it comes to computing — we only see people who relate to code and programming as we do, as people in our peer group and community do.  There are many people who code because of what it lets them do, not because they want the resulting code.

“You should be learning to write as little code as possible. Ideally none.”  And people who want to do interesting, novel things with computers should just wait until a software developer gets around to understanding what they want and coding it for them?  I could not disagree more.  That’s like saying that the problem with translating the Bible is that it made all that knowledge accessible to lay people, when they should have just waited for the Church to explain it to them.  “Please don’t learn to code” can be interpreted as “Please leave the power of computing to us, and we’ll let you know when we’ll make some available to you.”

——————–

It assumes that more code in the world is an inherently desirable thing. In my thirty year career as a programmer, I have found this … not to be the case. Should you learn to write code? No, I can’t get behind that. You should be learning to write as little code as possible. Ideally none.

It assumes that coding is the goal. Software developers tend to be software addicts who think their job is to write code. But it’s not. Their job is to solve problems. Don’t celebrate the creation of code, celebrate the creation of solutions. We have way too many coders addicted to doing just one more line of code already.

via Coding Horror: Please Don’t Learn to Code.



34 Comments

  • 1. rdm  |  December 20, 2012 at 10:50 am

    I would change “You should be learning to write as little code as possible. Ideally none.” to “You should be learning to write as little code as possible. But you should aim to write efficiently, not eliminate coding entirely.”

    Unnecessary code is evil. Necessary code is good. Minimizing the volume of code is a plausible way of characterizing this eternal struggle.

    Reply
    • 2. Mark Guzdial  |  December 20, 2012 at 10:56 am

      I strongly disagree, Raul. If it takes an end-user more lines of code to understand what they’re doing and get it done, that’s completely acceptable, and CERTAINLY not evil. Stop placing the standards of professional software engineers on end-user programmers. That’s like saying, “If you can’t make your grocery list sound like Shakespeare, it’s not worth writing.”

      The quote you’re citing is Jeff’s, not mine. My opinion is that it’s completely wrong. Instead, what I say to end-users is: “Write code. Write code that makes sense to you. Write code that lets you do amazing things. Don’t let arrogant and judgmental computer scientists stop you.”

      Reply
      • 3. rdm  |  December 20, 2012 at 10:58 am

        Ok, but if they really need those lines to accomplish that end, it’s hard to claim that the lines are unnecessary in that context.

        Reply
    • 4. rdm  |  February 20, 2013 at 8:01 am

      I just wanted to reiterate that reiteration can sometimes be a bad thing. Or, something like that — as I see it the problem is not “coding”, the problem is “dead code” (code which does nothing) and “broken code” (code which does not do what it is supposed to do) and “elaborate, inefficient code” (the bane of my existence – code which does something right but which spends most of its time doing something else entirely different). Malware can be thought of as a special case of bad code.

      The problem, I think, arises when people confuse “lots of code is garbage” with “code is garbage”. These are two incredibly different concepts.

      On the one hand: we do not have a problem judging English based on the quality of the writing, so why should we treat all code as equivalent?

      On the other hand: people should not take criticisms as discouragement. They should instead take them as encouragement. Unfortunately, however, this does not adequately describe how people react.

      Reply
  • 5. mrstevesscience  |  December 21, 2012 at 1:04 am

    So I really like your comment:
    “The problem is that we in computer science often have blinders on when it comes to computing — we only see people who relate to code and programming as we do, as people in our peer group and community do. There are many people who code because of what it lets them do, not because they want the resulting code.”

    But as an “arrogant and judgmental computer scientist” I would modify it slightly to:
    “what it lets them do easily and quickly”

    Now I am not saying folks shouldn’t work hard to get something they want (they will if they really want it). I am all for “hard fun,” but too many things that should be easy are too hard. Ideally I want to be able to go from idea to conception as quickly as possible. I realize I will have to struggle along the way, playing and debugging as I go, but I don’t want those struggles to be because of the programming environment. I have too many bugs in my mental models and I don’t need extra ones to deal with.

    Scratch does a really good job at this, but its ceiling is too low (for me at least; they really focus on not confusing the user, which is a good choice considering their target audience). That said, they are making some great improvements in 2.0 by adding “Build your own blocks” and a “Backpack” to make reusability easier.

    Etoys is also amazing in its power and expressiveness (although it has a number of rough edges which can be frustrating to beginning beginners). It also suffers from not being able to work in a web page or on a mobile device, which is what most people want to do.

    What we need are more environments like these that let folks play with and express ideas (and get things done) in a way that lets them focus more on the ideas and work to be done than on programming, while letting them dig into the code and program when needed.

    Reply
    • 6. Mark Guzdial  |  December 21, 2012 at 8:54 am

      I take up those themes in today’s blog post. We do need to make it easier to code, but I think that the issues of culture and pedagogy are even more significant in terms of making coding accessible than the environments.

      Reply
  • […] I mentioned in a previous blog post the nice summary article that Audrey Watters wrote (linked below) about Learning to Code trends in educational technology in 2012, when I critiqued Jeff Atwood’s position on not learning to code. […]

    Reply
  • 8. Mike Lutz  |  December 21, 2012 at 9:56 am

    Here’s a problem, Mark – those one-shot coding sessions “to find something out” grow and evolve and become the unmaintainable messes that software engineers loathe. As long as it’s your personal toy, fine – but all too often a colleague becomes interested, it’s released into the wild, and then all hell breaks loose. Folks unrelated to the original coder become dependent on it – for all intents and purposes it’s a product – and now some poor engineer is called in to “fix it.” Want a good example? Just look at the mess PHP libraries have become.

    Neither I nor the students I teach aspire to be code janitors, yet all too often that’s what happens. I believe you do your CS students and the profession a disservice if you don’t at least encourage non-computer types to do a reasonable job of organization, formatting, naming and documentation, and illuminate why this is important in practice.

    Reply
    • 9. Mark Guzdial  |  December 21, 2012 at 10:07 am

      Mike, we can absolutely find examples of where a personal toy grows into an unmaintainable monstrosity — but those are very rare. Scaffidi, Shaw, and Myers (2005) showed that end-user programmers outnumber professional software developers 4:1. There’s a lot of code being produced, and almost none of it becomes a “product.” Yes, we can point to the PHP libraries as examples of a mess that grew without software engineering, but I can point to examples just on my campus where an engineering professor’s “toy” became a product explicitly by involving software engineers to redesign the small project into a robust, maintainable, and reliable software system.

      I would like to see some empirical studies of this point. My sense is that almost all one-shot coding sessions last no longer than that one shot. I think that “your pet project will become a huge scary mess” is a bogeyman that we tell non-CS types to keep them out of our field.

      Of course, I encourage “organization, formatting, naming, and documentation” standards from my students — but I don’t require them. Those standards are unnecessary for most of what they will ever write. I’d rather see them code something than not code at all because they can’t code to professional standards.

      Reply
      • 10. Mike Lutz  |  December 21, 2012 at 11:37 am

        The problem is that the bogeyman is real – I’ve seen him under my bed.

        Many of the systems I’ve worked on in industry were multi-KLOC systems developed by experts from other domains. By the time I and others with a S/W development background arrived on the scene to “fix” the products, things were a mess. I even thought about creating a new discipline, software archeology, devoted to identifying the various cultures that had built these systems piece by piece. Sort of like Schliemann and Troy, but without the clear layer boundaries. Or like Mike Holmes on “Holmes on Homes.”

        And while your “I’d rather have them code something” has appeal, how would you feel about a Chemistry lab whose instructor said “I’d rather have them mix something than not mix things just because they can’t mix to professional standards.” Yeah, that’s snarky, but I think there is a smidgeon of truth in it.

        Can we possibly reach a compromise? You mentioned the engineers who called in the s/w troops to turn a toy into a product. Sounds like a previous head of our IE department who said about statistics “I want our graduates (a) to be able to do the basic analysis and (b) to know when they have to bring in a statistician.”

        Could you try to do the same for your non-computing students w.r.t. software engineers? Then the bogeyman might really be defeated.

        P.S. I have a scary story about a heart surgeon and his spreadsheet in the surgical theater if you need a real bogeyman.

        Reply
        • 11. Mark Guzdial  |  December 21, 2012 at 11:52 am

          Mike, you’re telling me about the cases where a toy system gets too big and becomes an enormous mess. I totally believe you! That happens. I’ve seen it, too.

          What I’m saying is that it doesn’t happen often. By far, most end-user code gets used for one purpose and gets thrown away afterward. And we use the fear of it happening to keep out people who might productively code little things that could have big results for them.

          Let me offer an analogy. If you eat too much bacon, your arteries will clog, and you will die of a heart attack. Absolutely, that happens. I still like bacon. I still eat bacon. I would never tell anyone that they shouldn’t eat bacon, just because it’s possible to go too far and have a health problem because of bacon.

          I don’t hide software engineering from students. I tell them about using a software engineer to make code robust, reliable, and maintainable. When I teach media computation to non-majors, there are always students who build something too complicated for them to make sense of. I do talk about how they could have structured their code better. In my lectures, I talk about the practices of software engineers, and about using object-oriented programming as a way of managing complexity.

          Software engineering (as a set of practices) and software engineers (as experts that they can turn to) should help them to go further, and not be a barrier that must be breached before doing anything interesting with code.

          Reply
          • 12. rdm  |  December 21, 2012 at 12:43 pm

            If you have real experts you can talk with, it seems to me that the right approach would be to gather requirements and priorities, and rewrite from scratch.

            Reply
            • 13. Mark Guzdial  |  December 21, 2012 at 1:05 pm

              Absolutely — I’m a big fan of throwing the first one away (“burn the disk packs”), using the lessons learned, and starting over. In fact, I’ve seen that happen at Georgia Tech in moving from toy to system, with good results.

              Reply
        • 14. Jess A  |  January 16, 2013 at 10:48 am

          Mike Lutz, creating code would be better compared to creating visual art rather than mixing chemicals. It’s not going to explode and endanger anyone if someone creates their own code to fulfill a personal task. Who are you to tell other people how to write code for their own personal use? Do you realize how arrogant and elitist you sound? You’d be better off telling a 5 year old not to use fingerpaints because it’s not up to the standards of professional art.

          Reply
          • 15. Mike Lutz  |  January 16, 2013 at 12:02 pm

            Jess,

            I have no problem with anyone programming for their own purposes. My issue is with those who overgeneralize their ability, and whose code escapes into the wild, where it becomes part of widely used systems.

            And, yes, such overgeneralization can be deadly. Read up on the Therac tragedies, where a lethal combination of software ignorance and amazing engineering arrogance led to abysmally written code that KILLED patients. I’ll admit that this isn’t an example of artists abusing their knowledge, but rather hubris by supposedly professional engineers; still, the analogy holds.

            For more visible examples of this effect, watch any episode of “Holmes on Homes” on HGTV. Electricians are not elitists when they criticize sloppy and dangerous wiring, nor are they out of line when they complain about having to rip out a mess and rewire a house. Similar comments apply to plumbers, carpenters, and, I would argue, to software developers faced with similar messes in software from novices for whom “after all, it’s only software” (a phrase I’ve heard all too often in my career).

            Reply
            • 16. rademi  |  January 16, 2013 at 1:27 pm

              Probably a mistake, though, to conflate “bad programming” with “bad health care”, even in contexts where both are present.

              Reply
              • 17. Mike Lutz  |  January 16, 2013 at 8:37 pm

                I guess I don’t understand how abysmal engineering of a device used in cancer treatments results in conflating bad programming with bad health care. From reading Nancy Leveson’s articles on the Therac, I saw nothing to indicate poor treatment or unprofessional actions on the part of the radiation technicians or radiation oncologists; they used a machine appropriately and the embedded software caused malfunctions which led to massive radiation overdoses.

                Reply
          • 18. Mark Miller  |  January 16, 2013 at 6:04 pm

            It struck me reading what you and Mike said that the real problem is that a lot of people using software cannot discern the difference between something created with finger paints and more developed art. More than likely it’s because they’re not even looking at the artwork. All they’re paying attention to is “What does this software make the computer do,” and, “Does it meet my end goal?” If it satisfies those criteria, they use it. That’s as far as it goes. After all, as the old argument went, “People who drive cars shouldn’t need to understand what goes on under the hood,” right? When we said that, we forgot that the same people who “drive cars” would be the people selecting what software to put into new products. Oops!

            Is it the software writer’s fault if they think they have a special talent they should share with the world, especially if that sense is validated by all the people who use it? Shouldn’t the people who use it have a sense that they should evaluate the software before they use it? I think you have a point that writing code has common cause with freedom of expression. Just because someone writes something doesn’t mean it has to be of good quality. Culturally we should prefer that, but lots of people write junky thoughts with words. Should we ban that because someone might believe or be influenced by those words, which might drive them to do something destructive? Or should we insist that people have some discernment and values in play when they read? Shouldn’t we value badly formed thoughts less than well-formed ones?

            Bringing this back to Jeff Atwood’s remarks, using this analogy, it sounds like he’s saying, “Please don’t become literate,” though I think what he, and others on here, are really saying is, “So many people think that just because they know their alphabet, and can form words with it, that they are literate. Please stop this!” This is a bad analogy, because we don’t have an alphabet in programming yet, but it serves to say that “a little knowledge is dangerous.” What I’m saying is the wrong people are being blamed. It’s not the writers of software who are at fault. It’s the people using it. The argument, “Please don’t learn to code” communicates weakness within the industry. It’s basically saying, “We have no influence over the people who use code, but maybe we can keep the ‘barbarian coders’ at the gate.” This is a sad state of affairs. I think we should be asking the question, “Why can’t users of software tell the difference between bad and good code, and how can computer educators improve that sense of discernment, and what kinds of ideas have value?”

            Reply
            • 19. Mike Lutz  |  January 16, 2013 at 8:24 pm

              First, let me clarify a bit. Most of the problems I’ve dealt with (or read about) are due to technical professionals in other disciplines (engineers, mathematicians, statisticians, etc.); I’ve never had to work on a Big Ball of Mud from the arts or humanities worlds. I’ll also state that when technical professionals take seriously the issues of quality, scale, and evolution they are among the BEST developers I’ve worked with, as they can apply all the disciplined techniques from their “home field” to software.

              But it seems there is a category of expert who believes deep expertise in domain/discipline X combined with a semester of C, Java, Python or FORTRAN means they have the skill to write large software systems concerning X. As I said before, the attitude is “it’s only software.” No electrical engineer would think he/she was capable of designing a suspension bridge, but all too many professionals have no compunction about diving into software development with gay abandon.

              Reply
  • 20. rdm  |  December 21, 2012 at 12:57 pm

    Also… is it only systems designed by non-experts which introduce problems? http://www.johndcook.com/blog/2010/08/24/overly-helpful-software/

    Reply
    • 21. Mark Guzdial  |  December 21, 2012 at 1:08 pm

      What a great question! That’s one for empirical software engineering. Do pet projects that get turned into products lead to more or fewer long-term engineering problems than projects that had being a product as a goal from the start?

      Reply
      • 22. Mike Lutz  |  December 21, 2012 at 1:17 pm

        You are right – this is an interesting empirical question. Certainly software engineers are not infallible – we make mistakes, gild the lily, etc. The issue is one of relative vs. absolute knowledge and competence.

        The fact that the Tacoma Narrows Bridge collapsed does not mean I’m going to let out my next bridge contract to handymen from Home Depot. Civil engineers have both the tools and the incentive to improve their engineering skills, something a contractor with a pickup lacks.

        Indeed, in Mark’s examples, I don’t want multimedia students to be worrying about such things – they’ve other fish to fry. I do want them to realize when they’ve wandered into engineering land, and to seek out s/w professionals when they do so.

        Reply
  • 23. Mike Lutz  |  December 21, 2012 at 1:06 pm

    Mark,

    We vehemently agree, I think. I love doing small problems to keep my brain in shape – sort of like piano etudes. Right now I’m getting up to speed in Erlang for distributed systems, and I have lots of little pieces of experimental code lying around. But if I decide to expand some into a case study or an example, you can be sure I’ll bring it up to engineering snuff. The key is that I recognize the difference between these experiments – spikes in the agile world – and code I’d release in the wild.

    I guess I’d just ask that you reinforce that they are working on one-off stuff. If it looks like it will grow into something interesting, there are professionals out there who can help them before the entropy gets too high. It’s like the difference between stringing an extension cord for the Christmas ornaments and rewiring your house (let’s eliminate the Clark Griswold effect!).

    P.S. You wrote “What I’m saying is that it doesn’t happen often.” Replace “often” by “ever” and I’d be totally on your side. Since this isn’t the case, I’d ask that you make students aware that there are larger issues out there. Like I said – IE statistics vs. statisticians.

    P.P.S. Do you ever do an exchange-and-extend lab or project, where students exchange their solutions with another person (or team) and then have to extend its functionality? The code doesn’t have to be big, just big for them. And the point is less the extension per se than the affective change in their perceptions – especially how your code can make someone else’s job difficult. As one of our teams stated after working on a system from a previous class, “this is a criminally negligent design.”

    P.P.P.S. If you’re on a paleo-diet you can eat bacon and other meat products to your heart’s delight. Problem is, you can’t eat carbs – no bread, very little sweet fruit, etc. Yucch.

    Reply
  • 24. M. Edward Borasky (@znmeb)  |  December 21, 2012 at 5:19 pm

    Atwood’s rant was pure cybercrud and I condemned it when he posted it. http://borasky-research.net/2012/05/18/please-dont-learn-to-code-cybercrud-at-its-worst/

    Reply
  • 25. middleearthman  |  December 21, 2012 at 11:41 pm

    Good grief. I have followed this conversation (and others recently posted on the blogosphere) and I am amazed that there is such evident shortness of vision.

    Coding is a tool. Period. With coding, you can do all sorts of simple things, as well as intricate things, good or bad.

    Learning to code is a bit like learning about geometry, or wood turning (on a lathe). For an architectural draftsperson, geometry is a useful if not essential tool. But, hey, anybody can learn some geometry. Some people actually get a kick out of using skills they acquired when they learnt about it. Learning about geometry enhances their overall education.

    I learnt a bit about HTML about a decade ago. Twenty years before that I learnt to write BASIC programs. Ten years before that I learnt about algebra at the same time as I learnt to play guitar and was also given tuition in drawing and painting. If you think there is no relationship between any of these skills then think again.

    However, I didn’t become a web programmer or a mathematics professor. I didn’t take the stage as a great musician. I didn’t fill the galleries with my work as an artist, but all of the different skills that I acquired have enhanced my life in ways that I believe are immeasurable.

    Why not learn a bit of coding? If it only gives you the opportunity, when it arises, to customise or improve existing code, as I have done, why the heck not learn about coding? After all, it can also be fun.

    Reply
  • 26. Ruben  |  December 27, 2012 at 10:09 am

    Coding is like Mathematics. You must use it sometime in your life, though you’re not gonna be a professional at it.

    Reply
  • 27. Mark Miller  |  December 27, 2012 at 6:07 pm

    I can somewhat relate to Atwood’s comment that programmers think it’s their job to create code, though I think it mainly applies to people who want to be developers. He contrasts this with creating solutions. I had this mindset (creating code) for many years, as a budding programmer, though this isn’t the way I thought of it. I had this idea of “always be creating.” I’d ask myself, “What novel idea for a process can I come up with to serve some goal that I or others would find relevant,” whether it was creating a game, or a practical application. It was fun. I learned some new skills and interesting ideas in the process, but I did not learn to code well, which I think is Atwood’s main point. It wasn’t until I majored in CS, and got out into the work world, that my focus shifted to coding well, and creating solutions. Even after I shifted focus, I longed to get back to “fooling around” with code. It took me a while to realize that there was much more that could be done with computing than fooling around with code for its own sake, and “creating solutions,” as Atwood and most developers think of it.

    Maybe the environment that budding programmers come into has changed dramatically from when I was at that stage, but I think what Atwood misses is that many aspiring developers, who became professionals and exhibited little if any of the bad traits he talks about, started as budding programmers who were fascinated by what computer programming could accomplish.

    It seems to me what he’s saying is you need to have a utilitarian goal in mind, and only when you realize that a computer would be a good solution for it should you aspire to become a programmer.

    I like the idea of universal literacy. Programming is like knowing how to read and write in the domain of computing, though the skill still has enough complexity that it’s similar to becoming a scribe in ancient times. It’s not like using the alphabet in the time of the Greeks. I think CS should look inward and wonder what it could do to make the basic elements of programming more like our alphabet. That would make widespread literacy easier, and more socially justified.

    As I read this post, this quote from Alan Kay came to mind: “Should we even teach programming?” His point being, as I recall, that so often for students the basic skill set required for becoming a programmer becomes the raison d’être of the whole exercise, without realizing that there are much higher goals to be had out of this thing we call computing. In a way he and Atwood are complaining about the same thing, though for different reasons. Once one starts to learn programming, there is a big temptation to want to create something the student considers relevant, but the goals for “what is relevant” are not far off from scrounging basic materials to create something that’s not of very good quality. Too often the skill gets applied to, “Look what I just created,” with an eye towards “solutions,” as Atwood would call it, or a neat trick. It’s the creative impulse. The first thought is to put the computer to narrow purposes, not expanding on the concept on which the object they are using is based.

    Perhaps teaching “programming first” is putting the cart before the horse. Maybe teaching about computing, and its relevance to systems should be a primary goal, with programming coming later. Perhaps earlier, more basic conceptual steps (perhaps concepts of thought that are used in math and science) are required as well.

    Reply
    • 28. Mark Guzdial  |  December 28, 2012 at 8:55 am

      I wonder about the dependencies and ordering in your last paragraph, Mark. Sure, knowing about computing and systems may be far more important than learning about programming. Can we learn about computing without a language for describing the notional machine? Is programming useful for describing the systems that we want to teach? Programming is a kind of literacy, and is only an end goal for a few people. Where is that literacy useful for learning? Computing and systems thinking seems pretty high up on that list.

      Reply
      • 29. Mark Miller  |  December 28, 2012 at 7:36 pm

        Hi Mark.

        I was trying to be a bit provocative to challenge some strongly held assumptions in the field. As I wrote it I thought about Seymour Papert’s approach with Logo. I need to read more about this, but my impression is his purpose was to teach kids math, not programming. Programming was merely a means to an end (to model with a computer in “mathland”), not an end in itself.

        A common question I used to hear students ask when I was learning algebra in Jr. high school was, “Why are we learning this?” Looking back on it, I can understand the question, because we were being taught aspects of understanding mathematics, but not the really important concepts that would cause people to recognize its significance, that it has implications for how we are able to see and understand our own ideas.

        I did not mean to suggest that computers should be shunned, or that people should use computers without knowing programming, or a language (preferably many different types). I should add that it is possible to learn some programming without using a computer, and it might even be constructive to do that with students, because it compels them to think a bit about what the notional machine is doing, which is where the real action is, not in the language that they can see.

        No, what I meant to get across is that too often the language, and what you can do with the notional machine under it, are the “main events,” the reason the students are there to learn, and that this approach misses greater goals that could be pursued. It would be better to establish a conceptual foundation for study–a domain of ideas, and then say, “We can use a computer as our lab to study these ideas, and here’s one way (a language, or set of languages) we could do that.” It reverses the role of the computer. Rather than it just being a tool for “creating whatever,” it’s established as a tool for exploring specific domains of ideas. Programming is seen as just a way of conveying a model to a computer. The “main events,” in the context of CS, are notions of computing, and, among some other possibilities, using knowledge of computing to explore notions of systems, information, knowledge, psychology, etc. Computing, as a body of ideas, can be seen as a way of understanding phenomena in a deep way.

        An idea that’s been returning to me again and again (I put this forward only as an example) is that processes that were invented (or developed) for what we can see as a “purpose” are transferable to an abstraction, which can have significance totally outside of where it developed originally. This, to me, is one of the “grand ideas” of computer science.

        Reply
  • 30. Is coding for everyone? « Gas station without pumps  |  January 1, 2013 at 10:42 am

    […] Definitions of “Code” and “Programmer”: Response to “Please Don’t Learn to Code”, Mark Guzdial points […]

    Reply
  • 31. A good year | run( ) {  |  February 13, 2013 at 7:36 pm

    […] For one of her top ed-tech trends for 2012, Audrey Watters noted that learning to code became a bit of a media and Silicon Valley favorite. It started with Codecademy’s genius marketing plan of the Code Year. But the most interesting part of the post was her link to Jeff Atwood’s blog on “Please Don’t Learn to Code.” In a gist, Atwood argued that one should not learn to code simply because there is a perception that learning to code is automatically equated with solving problems or a big paycheck. There are parts of his argument that I agree with; e.g., that Michael Bloomberg would be better off attending to his mayoral duties than learning about variables and functions (incidentally, there is great irony to this example…which I will get to below). Then there are parts of it that I don’t – which Mark Guzdial put in much more eloquent terms than I ever could: […]

    Reply
  • […] or “Advanced how to cheat other gamers.”  I wonder if this is another case of “Don’t learn to code — leave it to the experts.”  Is it really threatening to IT firms that more teenagers are learning to […]

    Reply
  • 33. Programming for everyone! | Ambericity  |  February 22, 2013 at 12:34 pm

    […] than figuring out the best actual solution), etc. Over the next days and months lots of people responded to […]

    Reply
  • […] producing.  I do not see MOOCs addressing my interests in high school teachers learning CS, or in end-users who are learning programming to use in their work, or in making CS more diverse. It may be that universities will be replaced by online learning, but […]

    Reply
