Thursday, February 16, 2012

My first home

I bought a condo in Kirkland, woohoo! I got my keys earlier today. The whole process took about a month. Too lazy to upload many pictures right now, so here's one that was auto-uploaded to Google+:

Maybe I'll post pictures of my empty place later, or maybe I'll wait until I have stuff in there. Or both. Two bedrooms, two bathrooms, 1300 square feet and a good view. I expect an even more scenic view in better weather, when the Olympics are visible.

Sunday, January 01, 2012

2011 in recap

It's been precisely a year since my last post, so I guess it's time to make another one! I really wish there were a blogging platform with Google+ sharing controls. WordPress has a great post editor but only supports public/private. Blogger has neither, and it still doesn't support paragraph tags. Google+ posts are too simple formatting-wise and aren't easily searchable. It's 2012, where's my perfect blogging software?

Time really does fly as you get older. At the beginning of 2011, I was still in school, living the life of an unhealthy grad student. May rolled around and I graduated. I slacked off for a few months and road-tripped across the US to Seattle. It was surprisingly fun, more so than I imagined a road trip would be, but I'm not going to blog about it here. There are pictures on Facebook and Google+, for those of you who can see them.

I started work at Google on August 1st. Easy to remember, but I'll probably forget anyway. Life has gone by pretty quickly since then; I've almost been working for half a year! I definitely miss the copious amounts of free time and nap time that I had before, but I guess working makes me use my time more efficiently.

I definitely feel happier working. I can't quite explain why, but it certainly helps that Google treats its employees very well. I think taking better care of myself has put me in a better mental and physical state. Some parts of grad school were quite stressful at times, as my last post mentioned.

Unfortunately, my life just isn't so exciting that I have a whole lot to say about it. I'm content with not being super exciting. What am I looking forward to in 2012? Hmm, hard to say. Releasing my first big change should be "interesting" to say the least. Since I'm approaching six months of work, I should take interview training so I can visit Purdue some time. Oh, and I guess that minor detail of potentially buying a condo. More on that later...maybe.

Oh, and have a new template. Wee.

Saturday, January 01, 2011

2011: A new year, a new post

New years aren't actually that exciting to me. In fact, I may or may not have missed 12am, due to playing video games. This New Year's Day was going to be more interesting, since it was the official deadline for me to decide whether or not to accept my offer from Google – the one I apparently didn't write about. I ended up deciding several weeks ago to accept the offer; most people who will read this probably already saw my Facebook status from the day I mailed my offer letter back. The decision didn't end up being easy (hurray, life-altering choices), but I ended up making it based on the fact that I hadn't progressed to a point where I would be comfortable spending the next few years working on a thesis. Stopping at a Master's now makes the most sense, since I'll finish my requirements in May. At any rate, a new phase of life begins this year. I guess that phase is called "growing up." It should be...interesting.

In other news, my family went on a cruise for winter vacation this year. The respite from cold and snow was great and I think everyone had a good time. In hindsight, I think I really needed a break from the Internet and work. Despite taking only one class at a leisurely pace, toward the end of the semester I think stress and frustration with various things/people were starting to take their toll on my well-being. The best ship entertainment was easily Second City. I didn't see them when they came to Purdue and now I regret it. Oh What a Night! was also a great act – a tribute to the music of The Four Seasons, interleaved with comedic banter. Eating cheesecake every night was also good entertainment. I made a Google Map to show where we went and put some details on the markers. Unfortunately, I didn't use my camera at all, so I have no pictures.


[Embedded Google Map: Christmas 2010 Vacation Cruise]

Friday, August 27, 2010

State of Computer Science at Purdue...address

My post entitled "Apparently computer science at Purdue sucks" is by far the most popular on this blog. Most of the views seem to come from searching for some combination of "Purdue", "computer science", and "sucks". There's a boatload of negativity in the comments and the post itself is over 3 years old, so I thought I'd give a refresh.

Disclaimer: I am no longer an undergrad at Purdue. I'm a grad student now and have experienced the other side of some of the things mentioned in the comments. However, this also means I'm less in touch with what other people are saying about the undergrad CS program. Most of these opinions are my own.

A few years ago, Professor Mathur took the helm as head of the CS department at Purdue. Let me just say to the naysayers that I believe he has taken our department in a very positive direction and that I would have counted myself very lucky to have had some of the things that have been introduced under his watch. To name a few:

  1. Software engineering specialization. This was one of the first things implemented. Having any specialization at all is a giant step forward. This is/was a bit of a mixed blessing because my experience in CS307 (Software Engineering) was not that enjoyable. Nevertheless, that problem is orthogonal to having specializations.
  2. Five year BS/MS. I definitely would have stayed for this if it had been implemented before I graduated. Unlike many people, I like class. Having an extra year and getting another degree out of it has no downsides to me.
  3. Track specializations. This is more or less the next step after the software engineering specialization, from what I can see. The first two years are still the standard core courses (CS180, CS182, CS240, CS250, CS251) plus CS252, dubbed "Systems Programming". From there, it branches out into seven possible tracks. This is much more flexible than before, since people can now get by without taking the "dreaded" compilers. To me, this is unfortunate (because I specialize in programming languages), but I can easily understand the reasoning. This also means that, for example, people who are interested in math-intensive CS can do the Foundations of CS track.
  4. Conversion of CS177 to Python. This wouldn't have affected me, but this change is huge. They seem to be using Mark Guzdial's media computation book. I definitely think this is a step in the right direction compared to previous iterations of the course (Java and HTML/JavaScript). I would love to TA this class so I wouldn't have to worry about people having horrible indentation.

Of course, we're far from a perfect CS department. If we were, people would stop commenting on my blog and I wouldn't have to write this post. Here are some ideas:

  1. Reintroduce CS381 as a core course. I see some weaknesses with the introduction of tracks in that you can get by without taking CS381 (algorithms). I can understand removing compilers from the core set of classes, but unless CS251 (data structures) is doing substantially more work, I believe this is a big mistake. Sure, it's still an elective under those tracks, but I don't consider that to be sufficient. Making this class required sacrifices a lot of flexibility, but I find it odd that anybody would be able to get a degree in computer science without an algorithms class.
  2. Introduce an entrepreneurial track. This may not be reasonable/feasible, but I just thought of it. Perhaps it should be a specialization that goes alongside an existing track, but I think it would be a good idea to foster the minds of business-oriented students. In particular, I think it should really emphasize the blend of CS and business, from web startups to...not web startups. I don't think this quite fits into a management minor, though there may be a bit of overlap.
  3. Implement course outcomes. I now feel that course standardization is a necessary evil. Engineering has outcomes, which guarantee that any student who passes a class has at least a basic understanding of the concepts taught in it. Doing the same in CS would give a more uniform education to all CS students, since professors would need to adhere to the designed curriculum, and every CS alum would graduate with at least a basic understanding of the concepts in all of his/her classes.
  4. Kill off Solaris labs. Someone I had numerous disagreements with did make a suggestion that I agree with wholeheartedly: the G040 Solaris lab sucks and should go away. Dealing with the Solaris terminals has very little benefit to anyone, yet it seems they're still used consistently in some classes. Personally, I think the Solaris lab in Lawson that has "real" computers should also be converted to a Linux lab. The Solaris lab has always been a source of idiotic quirks that any CS180 UTA has probably dealt with.
  5. Update hardware. On a similar note, though not completely related to undergrads, I think we're in dire need of a hardware refresh. The machines we get in TA offices still date back to the Pentium 4 era. Compared to the equipment other universities get, it's just embarrassing. Also, our network shares are tiny, which results in frequent e-mails complaining about partitions filling up. I don't understand this at all in a day and age when terabyte hard drives can be had for under $100 and quad-core computers for less than $400.
  6. Increase flexibility in the new core. More flexibility in the new core would be a big step forward for people who come in knowing how to program. I won't say I learned absolutely nothing in CS180/CS240, but the amount that I learned was definitely not worth two semesters of work. I had been programming MUDs for a substantial amount of time, so I was familiar with C already. I didn't know Java, but I knew C++, so CS180 was not that exciting. Ironically, Professors Dunsmore (CS180) and Brylow (CS240) were among the best I had, but I would be willing to forgo the experience of their instruction in order to take more upper-level classes. I understand that many "experienced" incoming freshmen are overconfident, but an adequate test-out procedure should be enough to filter them out.

Lastly, here are things I will argue vigorously against:

  1. Keeping course curricula up to date with the latest fads. I argued against this in the comments section and will repeat it here. Computer Science to me is not about learning the latest technology (in this case, F#), but about learning things that enable you to easily understand the latest technology. Trying to keep up with the latest fads not only puts a lot of pressure on faculty to come up with new syllabi, but also has questionable benefit. Putting F# in a curriculum effectively binds people to Windows or forces them to figure out Mono, while OCaml works on multiple platforms.
  2. Teaching technologies rather than foundations. The other example in the comments was about learning Silverlight. I risk offending people with this part, but oh well. Majors like C&IT are where you go to learn things like this. They are utterly focused on technologies rather than foundations. To put it one way, a C&IT degree is a fish and a CS degree is an instruction manual on how to fish. To put it very bluntly, certain parts of C&IT are what motivated CS majors learn in their spare time. Of course, to some degree this is necessary (when teaching programming).
  3. Pipe dreaming about infinitely flexible courses. I'm sorry to say that this is thoroughly unrealistic and puts much more strain on already-stressed TAs trying to grade objectively. Being able to pick your own projects for classes sounds nice, but I haven't seen it implemented effectively in any core class.

There's one big factor in the equation that I haven't (and won't) address, and that's people. How can we make students motivated to learn and faculty motivated to teach? Who knows? Not me. Nevertheless, I think the undergrad CS program has progressed in a positive way. I'm interested in hearing what undergrads who actually get to experience these changes have to say.

Sunday, August 15, 2010

A fork in the road

I think historically I've posted at the beginning of each summer with goals/plans, etc. Oops. I don't think I had a whole lot to say at the beginning of the summer anyway. Last year was pretty uneventful aside from OOPSLA, which I posted about. I don't think I'll be returning this year, unfortunately. There were only minor details like, oh, I got a summer internship at Google.

Okay, not so minor. I didn't have much to say on the topic, though, since I didn't know exactly what I'd be working on. I'm on the last week of my internship now and I still can't say what I'm working on. I'll just say that I couldn't be (much?) happier with the project assignment I have. If you combine this with the blog title, I'm sure you can tell the direction in which this is heading.

I'm nearing what appears to be a fork in the road that is my life. It "appears" to be a fork because, in reality, neither road is actually completely open. The first path is to stay in grad school until I get my PhD; the second is to leave grad school with an MS and go to work at Google. In order for me to stay in grad school, I have to pass my qualifier. The second part has proven more difficult than anything I've experienced in school so far. Obviously, to work at Google, I would need to get a job offer.

Staying in grad school has some benefits. Many PhDs work at Google, so the industry option is never eliminated; many compilers/PL people have PhDs, which is still my field of interest; a PhD is basically essential for teaching. Of course, it's not without cons, either. Actually, I just have one big, big con right now and that's research. I was always unsure of how well I would adapt to a research environment. The answer so far has been, "not very well." The reasons range from finding many papers to be boring to not enjoying my research work. For the latter, I have nobody to blame but myself. My advisor is always open to other ideas. I just don't have any. This problem will only compound if I stay in grad school, since I'll have to produce a thesis.

As for Google, let's just say that if working there full time were anything like my internship has been, it would be great. I don't think I really need to evangelize working at Google. As for leaving grad school for Google, there are definite cons. Starting a job will be the next big shift in lifestyle. In school, my schedule is very relaxed. Working pretty much blocks off most of my day every weekday, which I'm definitely not used to. Even if work is really fun, this has proved to be taxing. In addition, leaving grad school basically eliminates the option of ever getting a PhD. This is mainly problematic since most of the PL people I see at Google have PhDs, and most of them seem to have gained their expertise during school. I feel like I have gained no such knowledge so far, and learning this stuff while in school seems optimal. I also just hate having options cut off from me, but I can get over that. Eventually.

So where does that leave me now? If I'm "lucky" then the decision will be made for me. Otherwise I have lots of introspection to do. I think if I can successfully steer my research career in a direction that I enjoy, I may stay in grad school. I'm not optimistic about this, but there's no way I'll feel good about the decision if I don't try.

Thursday, October 29, 2009

OOPSLA 2009, Day 5: Thursday

Today was nearly all research talks for me. I skipped the keynote by the director of engineering at Facebook, which I guess turned out to be a bad idea, since most people seemed to think it was interesting. The first session was on static analysis and types. I finally got to see the presentation on Doop, after reading the paper on it a while ago. Doop is a pointer analysis tool written in Datalog, which is a subset of Prolog. Since I was interested in pointer analysis for SCJ annotation verification, the Doop paper was one of the few I had read, particularly because it targets Java.
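For anyone who hasn't seen the Datalog angle before: the core of an Andersen-style points-to analysis boils down to a few inference rules over "new", "assign", "store", and "load" facts, evaluated to a fixed point, and Doop states rules of roughly that shape declaratively (plus a lot of context-sensitivity machinery I'm glossing over). Here's a minimal Python sketch of that fixed-point computation over a made-up four-statement program; it's just to show the flavor, not anything Doop actually does:

```python
# Tiny Andersen-style points-to analysis: a naive fixed point over the four
# kinds of facts that a Datalog engine would handle for you. Program is made up.
from collections import defaultdict

# a = new Obj(); b = a; b.f = a; c = b.f
facts = [
    ("new", "a", "o1"),
    ("assign", "b", "a"),
    ("store", "b", "f", "a"),
    ("load", "c", "b", "f"),
]

points_to = defaultdict(set)   # variable -> set of abstract objects
heap = defaultdict(set)        # (object, field) -> set of abstract objects

changed = True
while changed:
    changed = False
    for fact in facts:
        if fact[0] == "new":                 # var = new -> var points to obj
            _, var, obj = fact
            new = {obj} - points_to[var]
        elif fact[0] == "assign":            # dst = src
            _, dst, src = fact
            new = points_to[src] - points_to[dst]
        elif fact[0] == "store":             # dst.field = src
            _, dst, field, src = fact
            for obj in points_to[dst]:
                if points_to[src] - heap[(obj, field)]:
                    heap[(obj, field)] |= points_to[src]
                    changed = True
            continue
        else:                                # load: dst = src.field
            _, dst, src, field = fact
            new = set()
            for obj in points_to[src]:
                new |= heap[(obj, field)]
            new -= points_to[dst]
        if new:
            points_to[fact[1]] |= new
            changed = True

print(dict(points_to))   # a, b, and c all point to {'o1'}
```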

I opted out of the second morning session and spent the time between the Thorn demo and the student volunteer room instead. The Thorn demo was, amusingly, heavily dominated by Purdue representatives (there was a demo yesterday that I suppose satisfied most people). It was nice to see example code. Thorn seems to have a ridiculously large number of operators at this point, and I made a remark about comparing its operator count to Perl's.

The first afternoon session was on memory. The first two presentations were rather difficult to understand, partly due to language barrier. The last paper was on object graph versioning, which is the paper that I'll be presenting at the Purdue PL seminar. The system is basically version control for objects. It sounds like they store each changed variable in a list when a snapshot is requested, which means constant time for most operations, but logarithmic time for snapshot retrieval. David Ungar said implementing this in a debugger would be of great use, which reminded me of historical debugging, which will ship in Visual Studio 2010. I think having this for other languages would be great, but it's not at all relevant to me if implemented in Smalltalk, as they did for the initial system. A Java implementation would be ideal, but I don't actually ever use JDB (I use Eclipse for most of my work, so I use whatever Eclipse uses — don't think it's JDB).
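To make the snapshot idea concrete, here's a rough sketch of the scheme as I understood it from the talk (quite possibly not the authors' exact design, and the names are invented): each field keeps a history of (epoch, value) pairs, ordinary reads and writes only touch the last entry, and reading "as of snapshot k" is a binary search over the history, hence the logarithmic snapshot retrieval.

```python
# Sketch of per-field versioning with cheap snapshots. Not the paper's design,
# just the flavor: O(1) current reads/writes, O(log n) historical reads.
import bisect

current_epoch = 0

def take_snapshot():
    """Freeze the current state; writes after this call land in a new epoch."""
    global current_epoch
    snap = current_epoch
    current_epoch += 1
    return snap

class VersionedField:
    def __init__(self, value):
        self.epochs = [current_epoch]   # sorted epoch ids
        self.values = [value]           # value written in each epoch

    def set(self, value):
        if self.epochs[-1] == current_epoch:
            self.values[-1] = value     # same epoch: overwrite in place
        else:
            self.epochs.append(current_epoch)
            self.values.append(value)   # first write since a snapshot: append

    def get(self):
        return self.values[-1]          # current read, O(1)

    def get_at(self, snap):
        # Assumes the field existed before the snapshot was taken.
        i = bisect.bisect_right(self.epochs, snap) - 1
        return self.values[i]           # historical read, O(log n)

balance = VersionedField(100)
s0 = take_snapshot()        # remember the state where balance == 100
balance.set(250)
balance.set(300)            # overwrites 250; both writes are after s0
print(balance.get())        # 300
print(balance.get_at(s0))   # 100
```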

The last research session was on language implementation. I didn't find any of the three terribly exciting, but it was cool to see Bjarne Stroustrup do a presentation. The third presentation on error recovery in parsing was semi-interesting, but I'm not familiar with SGLR parsers.

There was an ice cream social at the end to celebrate the upcoming change from OOPSLA to SPLASH. Or at least, the overall title will become SPLASH, which is Systems, Programming, Languages, and Applications: Software for Humanity. What a crazy title. Apparently OOPSLA has shifted away quite a bit from the OO, which is why the name changed. The research track keeps the OOPSLA name, but I guess the umbrella was renamed to accommodate the other events hosted alongside it.

It's been a fun week, albeit ridiculously tiring. I'm definitely ready to leave so I can sleep more, but I suspect things will get somewhat mundane again. After all, corn fields can't compete with Mickey Mouse. I didn't really meet anyone new, which is unfortunate, since OOPSLA is a great opportunity for that. The large Purdue presence ended up clustered together most of the time, making the most ridiculous comments about everything, as well as laughing at anything and everything, so I guess people may have thought of us as a clique. I'm also not very good at picking people's brains on their research topics, so I find it hard to approach anyone. I should probably read more papers to expand my horizons a bit, since most of my computer knowledge seems to be industrial (and not related to programming languages), rather than academic.

Wednesday, October 28, 2009

OOPSLA 2009, Day 4: Wednesday

Today's session began with a keynote from Jeannette Wing. It was a kind of pep talk to encourage collaboration with other fields, as well as high-risk, high-yield research topics. It was moderately interesting to hear things about NSF, since she's currently the assistant director of the CISE directorate, but otherwise, I'm not entirely affected by speeches that are calls to arms and whatnot. The presentation itself wasn't all that bad, though someone commented to me that the higher-level your position is, the more vague your answers get. Totally amusing, yet totally true.

Following the keynote was the morning research track, which focused on reliability and monitoring. None of the papers really stood out to me, so I won't go into it, but you can see the research track papers here.

After lunch was another invited talk by Gerard Holzmann, who discussed the use of formal methods in software development (particularly in spacecraft, where correct software is obviously vital). For some reason, all of the larger rooms have their lights dimmed, so I was dozing off, but the basic idea was that after the initial setup, formal verification software is easy to use and helps a lot, but people are scared to use it for some reason, when they shouldn't be. When asked about the tradeoff between testing and formal methods for non-safety-critical software (such as GNOME desktop), I don't think he gave a very straight answer. He of course knows that the answer defaults to testing, but as to whether or not formal methods are truly beneficial and worth the time to understand, that was left unanswered.

The research track in the afternoon was on software tools and libraries. That just doesn't sound particularly researchy to me, but I was one of the volunteers assigned to it, so it didn't really matter what I thought.

The first talk was on IMP, a project from IBM to greatly simplify IDE development. Extending Eclipse is supposed to be comparatively complex, which is why the project was born. Essentially, you must implement a lexer/parser at a minimum, and can then add IDE services at will. I know little about either project, but it sounds somewhat like Visual Studio Shell, which was released with Visual Studio 2008. It was an interesting talk, but not really from a research perspective.

The second talk was about bridging the gap between Java and C debuggers in order to debug JNI code: essentially, a unified stack so you get a full stack trace when a problem spans the JNI boundary. The content was mildly interesting, but I don't exactly deal with JNI, so it wasn't all that applicable to me. On top of that, the presenter was a little overbearing, and I'm certainly not the only one who thought so.

The last presentation was on C#'s Task Parallel Library. Even if my research is mostly in Java and I otherwise mostly try to use Python, I still consider C# to be a good language (and it's unfortunate that most researchers opt for the JVM). I never really read into the parallelism that is shipping with C# 4.0, so it was interesting, but again, it didn't feel very research-like to me. Seeing Parallel.For and futures in C# is nice, but I can't imagine it being very complex, conceptually. Rather, it was somewhat like Sunday's tutorial, where they were implementing already-known techniques in a pure OOP language.
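For reference, the two ideas the library packages up, a parallel for and future-like handles, look roughly like this in Python (which is what I actually write when given a choice). This is just a sketch using multiprocessing, not a claim about how the TPL itself works, and the workload function is made up:

```python
# Parallel-for and future-ish handles with multiprocessing.Pool.
from multiprocessing import Pool

def expensive(n):
    """Stand-in for a real workload."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with Pool() as pool:
        # Parallel-for flavor: workers split the index range among themselves.
        results = pool.map(expensive, range(10000, 10010))

        # Future flavor: kick off work now, block for the answer only when needed.
        pending = pool.apply_async(expensive, (1000000,))
        print(results[:3])
        print(pending.get())   # blocks until the worker finishes
```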

This evening was the big, Hawaiian OOPSLA dinner. Unfortunately, I think Disney got their ethnicities mixed up, because about half of the food was Chinese and the other half was American. Oops. For the curious: pork dumplings, crab rangoon, fried rice, vegetable stir fry, pulled pork, lemon chicken. Yeah, what part of that is from Hawaii? Apparently there's dancing at every OOPSLA, which was a pretty amusing thing to see. Other than that, the evening wasn't all that exciting. Since it was difficult to find a table, all the Purdue people ended up at the same tiny table with Tyler, from Iowa State, whom I met at the summer school. Not being a social butterfly and all, I feel awkward just walking up to strangers to talk to them, so I guess I missed out on a good opportunity. In fact, I haven't really met many new people at all at OOPSLA; I've just talked to some Purdue people more than I had on campus and met several of the summer school attendees who made it to OOPSLA.

Takeaways: from the research program, not much. I've had more interesting days. The formal methods talk leaves questions open to be answered, but I have no real desire to enter that field, so I guess someone else will have to answer them. Though things like model checking are fairly interesting, I am certainly not one who indulges in formal methods and verification.

Tomorrow is the last day. It's been a tiring, but overall fun experience.

Tuesday, October 27, 2009

OOPSLA 2009, Day 3: Tuesday

Tuesday is when the research track began. The morning started off with a keynote from Barbara Liskov, this year's winner of the ACM Turing Award. I think it was quite an interesting talk, since I don't often hear historical perspective on the earlier days of computing. She gave the same speech as the one she gave when she accepted the award: essentially, how she came up with the ideas that led to her impact on our field and, subsequently, a Turing Award. I skimped on the morning research papers, since they didn't look terribly interesting to me, in favor of getting more work done.

Tuesday's volunteer duties consisted of working at an info booth for 2 hours. I think in most cases, the info booth is just for people who don't feel like pulling out their programs to find where things are, or can't look behind them to find registration/the bathroom. The last half hour overlapped with the Onward! keynote speaker, so I skipped it as well, after seeing slides that didn't look too exciting and people leaving early. Turns out, it was probably a good choice.

The afternoon research track was on concurrency. I'll highlight two of the papers:

Grace, a safe multi-threaded programming system for C/C++, was an excellent presentation, with a good chunk of humor that anyone knowledgeable about concurrency would appreciate. The idea is something like spawning processes to perform "fork-join" parallel computations, using mmap to share data between them. Interestingly, they actually get better initial performance from spawning processes instead of threads. The Linux scheduler doesn't migrate threads to separate cores on a processor for some amount of time, while processes are migrated instantly, taking advantage of available cores much more quickly. The reason for this is related to caching/processor state, from what I understand. Since threads are part of a larger process, they typically share data with other parts of the program. Processes, on the other hand, are typically isolated from each other, which means that immediately moving one to another core doesn't hurt cache performance.
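Just to illustrate the process-based fork-join pattern (this has nothing to do with Grace's actual implementation, which layers safety guarantees on top): in Python, forking worker processes that all write into one mmap-backed shared buffer and then joining them looks something like the sketch below. The worker function and sizes are made up.

```python
# Fork-join with processes sharing an mmap-backed buffer, via multiprocessing.
from multiprocessing import Process, Array

def square_range(shared, lo, hi):
    """Worker: fill shared[lo:hi] with squares; writes land in shared memory."""
    for i in range(lo, hi):
        shared[i] = i * i

if __name__ == "__main__":
    n = 1000
    shared = Array("d", n, lock=False)   # backed by shared (mmap'd) memory

    # Fork: one worker process per chunk of the index range.
    chunk = n // 4
    workers = [Process(target=square_range, args=(shared, k * chunk, (k + 1) * chunk))
               for k in range(4)]
    for w in workers:
        w.start()

    # Join: wait for every worker, then the parent reads the combined result.
    for w in workers:
        w.join()
    print(shared[10], shared[999])       # 100.0 998001.0
```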

The other highlighted presentation was on Thorn. Thorn is a collaboration, mainly between Purdue and IBM, to design a robust, concurrent scripting language. Unfortunately, the presentation was more about robustness than concurrency, even though it was in the concurrency track, but it was nice to see slides on the language, rather than guessing through the "documentation" that I saw in Thorn as an undergrad.

I think today's lesson is that computing history, unlike national history, is actually interesting. Hearing Barbara's perspective on things was very cool, since she's the first person I've actually seen present their view of computing history while having actually lived through and participated in that era. Most historical perspectives I hear come from peers who just know a lot more about history than I do.

Monday, October 26, 2009

OOPSLA 2009, Day 2: Monday

On Monday, I spent most of the morning trying to get some work done, in order to not completely screw myself over upon return. Since my job that morning was to be a "floater", I couldn't do anything anyway (a floater is someone who just sits around unless the volunteer captains need them to do something). I missed my advisor's keynote at the Dynamic Languages Symposium (DLS) as a result, but I'm pretty sure I've heard the material many times before.

I was able to attend the afternoon session of DLS, which had some interesting talks. A paper on type reconstruction for dynamic languages looked like an idea that had briefly crossed my mind a while ago, but here it was fully explored, so it was interesting to hear about. There was also a talk on running a VM on a VM (mind blowing, right?) to take advantage of the host VM's JIT, in order to eliminate the need to write a JIT for the guest VM. I find [nearly] all things JIT exciting, so it was a good talk, even if their performance benchmarks were extremely slow.

The Ruby Intermediate Language was presented as an easy-to-analyze intermediate form of Ruby. It was mildly interesting, but personally, I don't find Ruby's hard-to-parse intricacies very interesting, as I ended up ditching Ruby for Python a few years ago. RIL basically eliminates the hard-to-parse syntax from Ruby, converting it to easier-to-parse equivalents, with the goal of making analysis tools easier to write.

There was a complaint at the end where someone questioned the necessity of RIL instead of just providing a standalone parser (which RIL may have, since none existed at the inception of RIL), which I tend to agree with — even though I think source transformations that simplify the AST are common, I don't think they're usually as game-changing as the ones presented in RIL. For example, the Java compiler collapses concatenation of string literals down to one literal, and it's completely transparent (and irreversible, as far as I can tell) to compiler plugins. However, this isn't the same as eliminating the ambiguities in Ruby that make life "pleasant" or "natural" for Ruby developers. It would be equivalent to a "JIL" that converted Java for-each loops into the ugly for-loop equivalents, for example, only Ruby offers many more conveniences than Java.
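To make the string-literal example concrete in a language that's easy to demo, here's a toy folding pass over Python's ast module that collapses literal string concatenation, the moral equivalent of what javac does. The point is how small and local this kind of transformation is compared to desugaring away a whole language's conveniences; the snippet being folded is made up.

```python
# Toy constant folding of literal string concatenation over Python's ast.
# Requires Python 3.9+ for ast.unparse.
import ast

class FoldStringConcat(ast.NodeTransformer):
    """Collapse "a" + "b" into a single string constant, bottom-up."""
    def visit_BinOp(self, node):
        self.generic_visit(node)   # fold the children first
        if (isinstance(node.op, ast.Add)
                and isinstance(node.left, ast.Constant) and isinstance(node.left.value, str)
                and isinstance(node.right, ast.Constant) and isinstance(node.right.value, str)):
            return ast.copy_location(ast.Constant(node.left.value + node.right.value), node)
        return node

tree = ast.parse('greeting = "hello, " + "world" + "!"')
folded = ast.fix_missing_locations(FoldStringConcat().visit(tree))
print(ast.unparse(folded))   # greeting = 'hello, world!'
```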

The last talk of the day was about object heaps on manycore (56-core) hardware. David Ungar from IBM presented experimental results using a Smalltalk VM. It's always interesting to hear about VMs and concurrency (and this is both!), since the topics are often at the threshold of my knowledge. However, this presentation was less "controversial" than the RIL paper, so I don't have much to say about it, other than: feel free to read any of the mentioned papers if they sound interesting.

An interesting day overall. After all, who doesn't like dynamic languages?

Sunday, October 25, 2009

OOPSLA 2009, Day 1: Sunday

Sunday's activities for me consisted of going to the VMIL workshop, which was quite interesting, and a tutorial on "realizing the benefits of functional programming in object oriented code." I was only able to attend the morning session of VMIL, which was basically the invited talks listed on the VMIL website. They were all decent talks. It was especially interesting to hear about Maxine, which I had seen briefly mentioned on reddit some time ago.

Unfortunately, though I was looking forward to the tutorial (which I was obligated to attend as the student volunteer on duty), it turned out to be somewhat disappointing. The concepts were not new to me, so I was probably not the target audience. C# actually has a Func type that corresponds to the Java version presented in the tutorial, and I'd seen currying in C# from previous investigations. I hadn't seen Java-style continuations before, but after lectures by Olivier Danvy on continuations this summer, the code was not too surprising. Nonetheless, the code was still interesting to see, as I am always fascinated by how other people program.
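For anyone who hasn't run into it: currying just trades a multi-argument function for a chain of single-argument ones, which is a one-liner in any language with lightweight closures (the Java version in the tutorial needed a wall of anonymous classes to say the same thing). A quick Python illustration, since that's the language I reach for; the functions are made up:

```python
from functools import partial

def add(a, b, c):
    return a + b + c

# Manual currying: a chain of one-argument closures.
curried_add = lambda a: lambda b: lambda c: a + b + c
print(curried_add(1)(2)(3))   # 6

# Partial application, which is what people usually reach for in practice.
add_ten = partial(add, 10)
print(add_ten(20, 12))        # 42
```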

I think the most disappointing part was how little discussion there was of the benefits of shoving functional-style code (which isn't exactly pretty in Java) into an OOPL, or of deeper case studies into performance/safety benefits, etc. There was the obvious discussion about referential transparency and so on, but I'm looking for more than just some buzzwords.

The presenter was a little disorganized and lacking in presentation skills (tangential expositions, extremely small writing), but we all have our faults, and I'm sure he can improve in time. It is clear that he likes the subject, but not immediately clear that he likes talking about it. I am, of course, not being literal, since if he didn't like talking about it, he obviously wouldn't have volunteered to do the tutorial.

Overall, I was satisfied with the first day, since VMIL was nice, and the tutorial wasn't a total waste.