Darkspore!

I’ve played Diablo II and never got into it… I eventually sold my copy for ten bucks to a guy who’d been banned on his previous CD key. I played Titan Quest for maybe an hour, and I don’t even know where that went. I never tried Torchlight or Deathspank or any of the other Diablo-esque games that have come out (well, I tried Hellgate: London a little bit), because the “collect rare lootz” premise just never hooked me.

        Spore was… ok. I played it long enough to get to the space part, and promptly wanted to leave. Controlling the creatures directly was really the best stage. So as a game it didn’t really get me. The animation and random creature generation, even the creature editor itself - that stuff was all great. But when you include all the various stages, it wasn’t a whole lot of fun.

        Darkspore, first of all, is an inevitability considering they probably didn’t make any money off of Spore. They’ve got amazing procedural generation (clearly a hot topic recently) for both the creation of random creatures and animating them, so why not make use of it? I can pretty much guarantee that 99% of the game’s assets were made by a guy pressing the “generate horrific monstrosities” button and then picking the five coolest ones out of the thousand possibilities that came up. From a business point of view, they’ve hit the jackpot, because the Spore creature editor will probably always be cool.

        As a game, though, it’s actually pretty sweet. You’re a Crogenitor (pffffft), and what that really means is you’re one of the demi-gods blessed with the ability to genetically modify powerful mutants. The game doesn’t say this, but you know that it’s true. In your role as “the last surviving god-thing from Spore,” you control a squad of Genetic Heroes (read: deadly mutant creatures) and you upgrade them using the various shiny pieces of equipment you find while running around murdering things.

        The heroes have subtypes (Necro, Plasma, Bio, Quantum, stuff like that) which decide their powers (summon ghosts, stun guys, create minions, slow down time) and then classes (Ravager, Sentinel, etc.) which determine your main stats and attack type (strength and melee attacks, dexterity and melee attacks, mind and long range, etc.). Then they have some detailed backstory stuff I didn’t read, which may or may not have been randomly generated too. The equipment you find is equipped to them in the same way body parts were in Spore, and I think you can find new body parts as well eventually. And of course the equipment has all sorts of stat boosts and cool adjectives like “Laseth’s Thunder Claws of Sharpness” and there’s a few levels of rarity. You can gamble your equipment by beating progressively harder levels one after another - double or nothing, essentially. Beating multiple levels in a row also increases the odds of getting rare (rarer?) items.

        Collecting some ugly little staff that claims to be a mystical bone wand, but really looks like every other staff, isn’t very engaging. Collecting energy claws, mystical hoods, cybernetic implants, crystal growths, etc. etc. and “equipping” them through the creature editor, however, is a whole lot more awesome. Maybe it’s that, or maybe it’s that they took the good parts of Spore and the good parts of Diablo, but I actually liked playing Darkspore. That’s pretty good, since I didn’t like either of those games. I don’t know if I’ll buy it, but the beta’s free until Friday, so go check it out. We can team up and stuff, and maybe that would be cool?

        (some dumb people in the game’s chat were like ‘wait is this game free to play’ and ‘screw it if they want a monthly subscription’ because they don’t realize it’s a normal retail game lol)

        The game has a campaign mode, with extra unlockable difficulty levels (and, I assume, better lootz) and co-op that increases the shiny stuff you and your team members get. It also has PvP (unlockable through a purchase at like level 9 or something), and I don’t know how that works. I have no idea if the full game is going to have more in it? I assume the campaign is going to be the only single-player mode, which is ok I guess, but it’s not like it has much of a story or anything. You’ll be hard-pressed to remember what the cutscenes tell you after five minutes of murder.

        Anyway! As I said, free until Friday, so we can team up and stuff if our schedules work out. In fact, I’m free tonight… but it’ll take like two hours to download. I’m ok with this multiplayer because it’s co-op and everyone wins.

QUICK HITS

I’ve got a bunch of things I want to say, but not a whole lot to say about them. Not enough to make individual posts, but too much to say in a facebook status update. So I’m shamelessly stealing the term ‘quick hits’ from The Electric Hydra podcast (or internet radio show) and presenting a bunch of stuff to you, with shiny bullet points for your reading pleasure.

  • I want to make a game about peace keeping space marines. Non-lethal weapons only. You have to save people from violence and help them and build intergalactic wells or something. Why not?
  • Procedural content is probably the way of the future, and there’s a great quote in there about games like modern Call of Duty being “the pinnacle of effort-based development.” What that says to me is this: there is a limit to what we can produce through sheer effort, simply because anything beyond that is too expensive, takes too long, or just isn’t viable for whatever reason. No one is going to make a game with a $100 MILLION budget unless they’re absolutely guaranteed to make money on it, which means selling an absolutely astounding number of copies. Procedural content generation means a whole lot of different things, but primarily, it means making a game like Assassin’s Creed II for the cost of whatever tools you use to generate the cities and assassination targets.
  • Slightly related to the above point, and mentioned in the second article (on the word content - yes there’s a slight space in the hyperlinks, you should notice this stuff, but I shouldn’t do it in the first place) is user generated content, sometimes called “procedural gameplay.” Stuff like Far Cry 2 perma-death runs, or the stories people make out of Minecraft, in which awesome play experiences are had by the players simply by making use of the systems a game makes available. This is also cheap, but slightly different from the generation of procedural content.
  • Ubisoft is hiring someone to help write an encyclopedia for Assassin’s Creed, and I REALLY SERIOUSLY WANT TO APPLY but I’m nervous, afraid of using resources like the Assassin’s Creed wikia and wikipedia itself, etc. etc. Being an adult is haaaaaaaaaaaaaaard.
  • There exists a NES rip-off of Final Fantasy VII.
  • A NES version, with the complete story and most of the features (Vincent and Yuffie missing, for example) of the PS1 game Final Fantasy VII.
  • FF VII for the NES was originally available only in Chinese. So a bunch of Chinese programmers converted FF VII into Assembly and 8-bit sprites.
  • A translation for this game exists. I’m too lazy to dig through the internet and find a ROM of it, but you can play it through this forum and if you REALLY WANT you can post a bunch on the forum and download it.
  • The game probably sucks and playing it on that forum’s arcade thing is probably terrible, but you can play FF VII as a NES game.
  • Bad games exist, and they probably shouldn’t, but some people just want to make money, not good games.
  • Why should your game exist?
  • A Carleton graduate posted an article on AltDevBlogADay. I kinda want to say something, but I really can’t think of any good reason for doing that except that he went to the same university I’m attending. Along with like 20,000 other people. I’m definitely mentioning it to Jim Davies at the last meeting with his lab on Monday though.
  • I actually really like reading reasonably short post-mortems, and here’s an interesting one.

The moral of the story is you should probably read AltDevBlogADay. I love the idea of “GDC all year round” because there are a ton of awesome people with great stuff to say who wouldn’t get a spot to speak at a conference because they aren’t famous. And they sure as hell wouldn’t be given an hour to talk about best code commenting practices, or the glory of “scripting languages” that don’t need ten minutes of compiling just to test minute changes.

You win some, you lose some

It’s a good thing I’m never going to be making charts in Java, because I would probably be screwed if someone thought Excel wasn’t good enough and asked me to write a new chart program. We’ve had two tutorials now on working with this chart program, and in both of them I’ve spent an hour and a half trying to get past the first instruction. It certainly doesn’t help that every step in the tutorial is meant to somehow summarize half an hour of work - there are only four or five steps in each. Why couldn’t I just spend ten minutes accomplishing the same work with a little more instruction?

        It’s not like the notes are any help, either. Here’s the entirety of our notes in the “Graphics” chapter. Think you could make a graphing application out of that? So we have bad notes that probably teach less than the Java documentation does (educate yourselves hurr hurr and we’ll just take your money), and we have unclear instructions that are written with the understanding that you’ll waste your time doing it wrong for a while until you figure out the problem through trial and error. What makes it worse is that every tutorial begins with you downloading at least five pre-built classes and then trying to fill in missing functionality. Last week’s tutorial had… 14 classes to download. Most of these have variables like X_AXIS_OFFSET and YEAR_SEPARATION_WIDTH and they’re used in a few places without giving you any real explanation of how it’s meant to work.

        Last week’s tutorial was to draw the chart, and I couldn’t get that working. This week was to do some mouse stuff. Anyway, the chart drawing code was supplied this week, and here’s the formula for the (x,y) of the points on a graph that I spent an hour and a half trying to figure out:

X: ORIGIN_OFFSET_X + i*YEAR_SEPARATION_WIDTH

Y: Y_AXIS_OFFSET + DATA_HEIGHT - DATA_HEIGHT * histogram[i] / maxValue

        The code I had, which was drawing everything slightly off for no reason I ever figured out:

X: (i*YEAR_SEPARATION_WIDTH)+ORIGIN_OFFSET_X

Y: Y_AXIS_OFFSET+DATA_HEIGHT-(aDVDCollection.yearHistogram()[i])

        Why does that multiplication happen? What’s wrong with my code? I could have asked the TA to write it for me, but that really doesn’t help anything. At best I finish the tutorial without learning anything. Or, wait, what was I supposed to learn? How to use the drawLine() method? Or was I supposed to learn how to make a mediocre program after inheriting someone else’s code? It’s the destiny of your average software engineer, after all. Really, though, it’s not a useful learning experience.
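        (For what it’s worth, here’s my best guess at the answer, worked out after the fact: the multiplication scales each histogram value into the DATA_HEIGHT pixel range, so the tallest bar always spans exactly DATA_HEIGHT pixels. Subtracting the raw count only draws correctly if the counts already happen to be in pixel units, which would explain things being “slightly off.” A minimal sketch - the constant values and class name here are made up for illustration, not taken from the actual tutorial code:)

```java
// Sketch of the scaled vs. unscaled Y-coordinate formulas.
// Constants are invented for illustration only.
public class ChartScaling {
    static final int Y_AXIS_OFFSET = 20;   // assumed top margin, in pixels
    static final int DATA_HEIGHT = 100;    // assumed drawable height, in pixels

    // Tutorial's formula: scale value into [0, DATA_HEIGHT] pixels
    // before subtracting it from the baseline.
    static int scaledY(int value, int maxValue) {
        return Y_AXIS_OFFSET + DATA_HEIGHT - DATA_HEIGHT * value / maxValue;
    }

    // The buggy version: subtracts the raw count directly,
    // so the bar heights are only right by coincidence.
    static int unscaledY(int value) {
        return Y_AXIS_OFFSET + DATA_HEIGHT - value;
    }

    public static void main(String[] args) {
        int[] histogram = {2, 5, 10};
        int maxValue = 10;
        for (int v : histogram) {
            // The largest count lands exactly at the top of the plot
            // area (y = Y_AXIS_OFFSET) only in the scaled version.
            System.out.println("count " + v + ": scaled y = " + scaledY(v, maxValue)
                    + ", unscaled y = " + unscaledY(v));
        }
    }
}
```

(Note the integer division: `DATA_HEIGHT * value / maxValue` multiplies first, so it keeps precision; writing `value / maxValue` first would truncate to zero for every bar but the tallest.)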

        So what I’m getting at with this boring little rant is this: last semester, I learned stuff in our tutorials and by reading the notes. They were written by the same guy. Clearly he just gave up trying to teach the second course, and figured we’d learn what we need to know through hours spent working on assignments. My instructor (a different guy) isn’t really teaching us a whole lot during class time, either. I mean, there’s some stuff he tells us that’s great, but he’s a professional programmer, not a teacher. If you’ve ever done any programming, you’ll know what I mean by this. You can probably figure out that watching someone do that for three hours every week would be boring as hell.

        So as for the title of the post, last semester was definitely a win, and this semester was pretty much a loss. Doing the assignments and looking at example code from class assignments and the notes has taught me plenty, but did I really need to pay six hundred bucks for it? Technically yes, because I need the course to graduate, but realistically, no. Sometimes you just don’t get what you paid for with post-secondary education. That’s the unfortunate truth.

        All in all, this just reinforces my decision not to specialize in computer science as part of my cognitive science degree. I may as well just teach myself everything I need to know with a tutorial and the official documentation. Maybe I should steal a few of the assignments for the other courses before they’re taken off the Carleton sites to give myself something to work towards…

So I’m in a philosophy class this semester called “Mind, World, and Knowledge” and we’re getting into the “mind” part of the course. At the moment, we’re looking at the debate between Dualism (the mind and body are separate things) and Materialism (the mind is a material thing) and while there are a ton of different positions within these broad categories, that’s the basic gist of it. Thus far in the course, everything we’ve considered has seemed pretty good on its own, at least until we get into criticisms coming from other philosophers. So I’ve been open-minded and accepted that they may have a point with what they’re trying to say.

        Getting into Dualism, it’s been really hard to stay open-minded like that. And that bothers me a lot, oddly enough. What am I learning if I cheer for every little argument against the position we’re considering and can’t think of anything but problems with it? That doesn’t mean it’s wrong, just that I’m looking for it to be wrong, or want it to be wrong, because of my own personal bias. Dualism isn’t literally about the soul or some mystical property of our minds, but it’s hard to escape that influence when you’re trying to separate the mind as a thing from our brain and our bodies. I’m sure that contributes to my bias against it - in a kind of abstract way, I would certainly say we have a soul or something along those lines, but I’d see that as something that arises out of what we are. As in, you have a soul because you think and hold ideas about things, and all of that comprises the “soul” of who you are. I guess I’m saying it’s equivalent to your identity.

        At any rate, that’s a Materialist position (or maybe Idealist…? that’s something about ideas making us who we are, or something) and pretty clearly shows my bias against what we’re learning. It just seems wrong and ignorant to look at this as somehow better than Dualism just because it’s the thing I happen to think is right. It’s a totally natural way to be, and just about everyone is that way. But does that make it right? If everyone is wrong, that doesn’t necessarily make it any better.

        I’m not bothered by the idea that I could be wrong, really - if some form of Dualism turned out to be right, well, that’s just great. Today’s topic, specifically, was a philosopher trying to show that sensations are proof of Dualism. A few of the examples he used were afterimages and pain. Now, I can tell you that there are specific types of nerve fibres for experiencing pain, and if you get distracted you can actually not perceive the sensation of pain your nerves are actually bringing to your brain. I can also tell you that, because of a certain way your eyes work and the neurons for sight work, afterimages are caused when they fire in reverse after a stimulus is taken away. So you stare at the sun and close your eyes, and you’ll keep seeing some colours. This is a purely physical, or material, thing - you’ve got neurons firing and they’re creating this sensation you’re having.

        Dualism is the idea that the mind is somehow separate from the body - so this philosopher was saying that the perception of things such as sounds (sensations) is something that belongs to the immaterial mind, and isn’t equivalent to the causes of the perception - i.e. the sensations. I’m using psychology terminology; he described it a little bit differently, but this is more precise. Anyway, the whole defence against a “this is what makes you perceive something” argument is this: the cause of your perception (the sensations, neurons firing, etc.) isn’t equivalent to the perception itself. So, the fact that you stubbed your toe and nerve fibres are bringing that message to your brain doesn’t equal the fact that you, as an immaterial mind-thing, realize “ow, my toe.”

        That’s all well and good, but that’s why the term perception exists separately from the term sensation. They’re different things. You could have perfect vision, and see everything, and yet think you’re blind, because you aren’t getting the perception part. You might even look at someone while speaking to them, and your neurons are reacting, but you’d still say you don’t see them. But I don’t see that as proof that the mind is somehow different from the brain and the neurons in it and the different cortices and so on.

        The fact that you can’t see the mind doesn’t mean it’s not there - if I can track every single little thing leading to you feeling pain, why is it somehow separate and special? That’s what gets to me about this particular argument, anyway, but I let it slide when the professor gave it to me as an answer because I don’t want to get the class off track. Things to do and discuss and whatnot. But, really, it’s not that I think the argument is wrong that bothers me. It’s the fact that I can’t bring myself to consider it seriously because I think I’m so smart and have all the real answers already. Well, not consciously. But on some level that’s obviously going on.

        You don’t get a good wrap-up of this post because I have to go to class. Everything I wanted to say is here, so there! Good for you if you wanted to read it. If not, I probably went on for way too long. Ah well.

Insomniac Games creates "social games" division

First of all, I think I’ve already mentioned here that I adore Insomniac. They’re just cool dudes. But second of all, this is actually a good, concise critique of the majority of facebook games (and whatever mobile games there are that take after them). I don’t really have much to say that isn’t already mentioned there, so go check it out.

On the subject of how problems and goals work in game design, here’s an Extra Credits episode more relevant to “core games” on choice and conflict. It’s not a new episode, but I thought of it right away as I was reading the Insomniac post, and I don’t think anyone has actually started watching Extra Credits on my recommendation yet. So I’ll keep linking to good episodes and that’ll be enough.

Because I worried (or hoped) that someone might worry about me and not get the joke, no, I don’t actually think I have a ton of mental illnesses. Sorry guys.

        If you still think self-diagnosing is a good idea, check out my notes for today ‘cause my psychology professor spent a good ten minutes telling people not to do it. I didn’t write much down because that would be useless, but there you go - diagnosis is complex and you don’t know anything about doing it, so don’t even try. She didn’t mention it, but I wouldn’t go into a meeting with a psychiatrist/therapist/whatever and say “so I think I have x, y, and/or z” because they’ll look for symptoms of that and probably jump to conclusions just like you did. “You’ll see what you expect to see,” after all. Unless you want to be diagnosed and get meds because you’re sure they’ll help/don’t care if they don’t help, in which case, feel free to continue.

        As for the actual reason I’m posting today, I wanted to mention again how much I like WriteMonkey. I like it as much as I liked Q10 when I was working on essays last semester, except a bit more, because it fixes problems Q10 had and adds a number of useful features. For example, something I’ve gotten used to using as a project-specific todo list is comments in my programming assignments - and WriteMonkey lets you keep track of comments, dimming them out a bit to set them off from the rest of the text. It lets you set progress goals (I think for the entire project) and then track partial progress, as in how much you’ve done in one session. It can do lookups on Google and dictionary.com. And I haven’t used it yet, but you can create tags it calls “jumps” and automatically find them in the text - so you could have like @[INSERT IDEA] or #[INSERT TODO] or whatever. It also tracks your most frequently used words! So that’s pretty cool, and you can see “hmm, I used the word ‘something’ twelve times, that’s not good.”

        Also, Ninite has made it possible for people to embed installers in their sites and show only certain programs, which is pretty neat because you can say “here are the programs I recommend using” and people can take whatever they want and just install them without any crap.

Recent UniNotes

So we’ve been covering some cool stuff in my psychology class recently. You’ll find my notes on motivation (why I became a hermit to do well in school), personality development (raise your kids not to be dumb), and stress (spoiler: it’ll kill you). Next couple of weeks will be mental illnesses - your depressions, schizophrenias, etc. Everyone’s favourite topic, really. On that note, I’ve self-diagnosed with every anxiety disorder except Post-Traumatic Stress Disorder (though I’m only lightly OCD) and just about every personality disorder. Hopefully they can medicate those away for me.

        My other class this semester is phonetics, which will be boring to everyone who isn’t a linguist, aka just about everyone. And if you are a linguist you already know phonetics so you probably don’t care. So that sucks I guess.

        But my other other class this semester is philosophy! And that is pretty cool and you’ll probably find interesting stuff in the notes for that. The topic is “Mind, World, and Knowledge” and so far we’ve done Knowledge and we’re about to move into Mind. I can’t really organize those in any meaningful way for you, unfortunately, but check ‘em out anyway. Some of it is boring, some of it isn’t.

        The best thing from PHIL 1301B, however, is definitely Pyrrhonian scepticism. There are three basic parts to Pyrrhonian scepticism: making no assertions, the method of opposition, and the four modes of acquiescence.

  • Making no assertions means that the Pyrrhonist doesn’t claim to know anything - they simply describe things the way they appear
  • The method of opposition is comparing opposing ideas - different religions, or superstitions perhaps, or even political ideas - and because they tend to be equally convincing, judgement is impossible and peace of mind is achieved - this is my new way of looking at my religious beliefs, because it sounds better than “apathetic”
  • The four modes of acquiescence just mean the Pyrrhonian sceptic accepts the laws and customs of their culture, their feelings and biological needs, their instincts, and the expertise of others in order to live their lives

        Check out the last three (or four if you read this on Monday) days of notes for more on this. The only really valid criticism we’ve covered would have to be that knowledge of skills - something the Pyrrhonist accepts as “know-how” because it doesn’t involve making assertions - at some point has to involve a bit of “know-that”, or regular knowledge, which eventually becomes equivalent to assertions. So being a doctor doesn’t just mean knowing how to treat your patients (know-how); there’s also knowing facts and things to help you treat your patients (know-that).

        I’m writing a paper about scepticism and how, as far as I can tell, Pyrrhonian scepticism is a perfectly tenable (fancy word for “it works”) position. So COME TEAR IT APART so that I can include that criticism in my paper! However please do check out the notes in case I’ve screwed up somewhere.

edit: also I’ve replaced Q10 with WriteMonkey for writing in peace

edit 2: if you google certain things, my notes show up as results - awesome, except when I’m looking for answers and find my own notes