Thursday, December 19, 2013

Tracking down the Harvard Non-Bomber

This year, (allegedly) a Harvard student performed the modern equivalent of pulling a fire alarm in order to avoid a final exam, in this case by sending an e-mail claiming that there were bombs in several buildings throughout the campus.  (One of many Crimson stories here.) 

I am proud to say that this student, who was apparently a psychology and sociology major or a prospective psychology major (according to Crimson reports), was (allegedly) using Tor and Guerrilla Mail to try to cover his tracks.  (See, for example, this article.)  I think it shows how Harvard has made it as a computer science/engineering school, now that even our psych majors know how to set up and use tools like this.  Years ago, before CS started taking off at Harvard, you would have been hard pressed to find a student in a liberal arts major who could use tools like this.  It just goes to show how the place has changed for the better.  I like to think that, if he had been a computer science major, with correspondingly more understanding of what tracks he was leaving (hint:  don't use your own computer on Harvard's wi-fi when sending a bomb threat...), he might have gotten away with it, or at least been a lot harder to track down. 

[Just to be clear, this is very tongue-in-cheek;  I in no way support or even really want to make light of what this student did;  it's utterly reprehensible.  And as several colleagues and I have noted, he knew just enough to be dangerous-- mostly, in the end, to himself.

Also, I was (again, along with several of my colleagues) 95+% certain right off the bat that it was a student trying to escape finals.  Besides the timing, the four buildings named as possible bomb locations included three big lecture buildings where exams were taking place... and a freshman dormitory.  (In fact, MY freshman dormitory.)  It seemed unlikely that the dorm would be on any real bomber's radar, and that seemed to me a clear signal that one or more students were behind it all.]   

Monday, December 09, 2013

Lesson of the Day

Saturday I took two of my daughters to see a musical at Harvard.  Amazingly, in the small theater, we were in front of a pair of students who seemed intent on talking throughout the performance.  (One male, one female;  the male did seem to be doing more of the talking.)  The volume crept up until, by the end of the first act, they were talking at normal conversation level.

As soon as the curtain hit I turned and, as nicely as I could (which was, probably, still with a snarl), pointed out that there were several bars and cafes available in Cambridge if they wanted to talk, but that we were here to watch a show.  I got several approving nods from around the nearby audience;  in fact, about a minute later, an usher for the theater came over and appeared to be telling them to be quiet or get out, so others had clearly complained.

To their credit, they were apologetic and stayed quiet for the second act.  I can only hope that I helped teach these students the important lesson that conversing in a theater is a very bad idea -- probably more important than most of what I ever teach in class.  (Although how they managed to get this far without absorbing that lesson somewhere is, I admit, beyond me.)  My older kids already know that, but they got some useful reinforcement.   

Wednesday, December 04, 2013

Algorithmic Growth (Class Size)

Pre-term planning numbers are in for Harvard, and it looks like the undergrad Algorithms and Data Structures class has about 175 people planning to take the course.  That's another huge jump over the last few years (enrollment has gone from the 50s to well over 100).  I imagine the growth is spurred by our ever-increasing enrollment numbers in the intro classes, as well as the fact that it's being taken over by a younger, dynamic, new faculty member.  (Go Jelani Nelson.  I can't help but think some number of students were waiting for me to go on sabbatical...)

These numbers are usually within plus or minus 10% of the final enrollment, though there's higher variance when new instructors take over.  If 175 became the steady-state class size, it would mean a bit over 10% of the students at Harvard would take Algorithms at some point.  I don't think I ever expected that when I started. 

If we can get the people resources, at some point we'll probably want to start breaking this up.  One direction is to make an "honors" class that would cover more/harder material more rapidly.  (We're thinking of making this an "honors theory" course that would cover both complexity and algorithms -- 2 classes packed into 1.)  The Math department here has done this very successfully, separating out the future Putnam winners from other students early on.  A benefit is that it leaves the remaining students happier as well, as the curve-breakers remove themselves from the regular class.  Another possibility is to do an algorithms-style class designed for non-majors, which would bring in more people not currently taking algorithms as well as some of those in the current course.  There are "topics" classes like this -- the Easley/Kleinberg book Networks, Crowds, and Markets: Reasoning About a Highly Connected World is algorithmic and seems to allow an instructor to choose a desired mathematical level from a broad range -- but I don't really know of examples of something more like a standard algorithms/data structures course designed for a broader audience than CS majors.  I'd be interested in pointers to and anecdotes about experiences in such classes if they exist. 

Tuesday, November 26, 2013

News Worth Reading

There's been plenty of interesting stuff popping up in the news, so it's time to collect a bit.

This article describes a current patent case in cryptography, where apparently Ron Rivest testified (by deposition) and Whitfield Diffie took the stand.  While I can't help but wonder if the article has dramatized the proceedings a bit, it does give some idea of what patent trials are like.  I found it interesting both legally and technically.  Anyone interested in the legal/expert witness side of technical work should find it worthwhile.

Update:  The case was decided, as Suresh notes.

In other legal news worth noting, Google Books was found to be fair use.  The decision is online (and available at the link) -- I'd definitely recommend reading it if you're interested in fair use and the legal framework for how fair use is currently (at least as of this decision) understood.    

Harvard's CS50 course just made the Boston Globe business section.  I'll quote the conclusion from the new boss, David Parkes:  “There’s a new willingness among the student body to take risks, to not follow what has been the default path of going into medical school or going into finance,” said David Parkes, Harvard’s dean for computer science. “I think part of it is that students are seeing a new way to contribute to society through computer science.”

Michael Nielsen put up a chapter of the latest book he's working on, on neural networks. 

I found myself more interested than I expected when reading the article How Academia Resembles a Drug Gang.  The issue of PhD overproduction is already well known, but this brought in a new dimension for me, with the discussion of "dualisation" -- employment systems with an "insider" class with unusually high benefits (supposedly, from the article, tenured professors) and a large "outsider" class of people trying to get on the inside and supporting their lifestyle (from the article, untenured part-time faculty and PhD students).  Probably worth thinking about in terms of trends in our field, but also just generally, I'm now curious about the economics.  Dualisation doesn't seem like it would lead to long-term stable systems -- what's the model?

I'll have to pay more attention to the news myself these days.  I was asked to serve on the Communications of the ACM editorial board, news division, and found myself unable to find a suitable reason to decline. 

Finally, a question.  'Tis the season to purchase some new machinery.  So is retina display for my next laptop a must-have, not worth it, or somewhere in between?  I'd appreciate opinions. 

Monday, November 18, 2013

Easy Now

In the past year or so, I've gotten reviews back on multiple papers with the complaint that the result was too simple.  My interpretation of the reviews: sure, you might have come up with an efficient data structure that solved an at least seemingly interesting and/or useful problem, but wasn't the underlying math and/or data structure approach "too easy"?  (The reviews were either from theory conferences or, in my interpretation, from theoretically minded reviewers in settings where there might have been a mix of reviewers.)  I'll admit, it's a bit disheartening.  And I'll also acknowledge that blogging about it probably seems like sour grapes.  But here goes.   

From my standpoint, easy is a plus, not a minus.  I've taken to describing it this way.  In computer science we're used to measuring algorithms and data structures in terms of the two basic tradeoff costs -- time and space.  The third basic cost -- error -- takes some people a while to get used to.  That is, your algorithm can be "wrong" (either probabilistically, or in that it gives an approximate answer) in some hopefully well-defined way in order to trade off against time and space needs -- many times you're willing to settle for a good answer instead of the optimal answer if it's much quicker.  But there's also a 4th basic tradeoff cost -- programmer time -- that I find the theory community is happy to just ignore.  Simple is good, because more people will use simple things, even if they're not the most efficient possible, because often the usual time/space efficiency isn't really the bottleneck.  Coding up something that works is.  This is why Bloom filters show up in most of my talks (Suresh, drink!);  for me they provide an outstanding example of issues related to the 3rd and 4th tradeoff costs.
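Since programmer time is the cost I care about here, let me make the point concrete with a minimal Bloom filter sketch in Python.  (This is purely for illustration -- the sizing and the SHA-256-based hashing are placeholder choices, not recommendations.)

```python
import hashlib

class BloomFilter:
    """A minimal Bloom filter sketch: m positions (one byte each, for
    simplicity) and k hash functions.  False positives are possible;
    false negatives are not.  The point is how little code it takes."""

    def __init__(self, m=1024, k=4):
        self.m, self.k, self.bits = m, k, bytearray(m)

    def _positions(self, item):
        # Derive k positions from one SHA-256 digest (an arbitrary
        # choice for this sketch, not the only or best option).
        digest = hashlib.sha256(item.encode()).digest()
        for i in range(self.k):
            yield int.from_bytes(digest[4 * i: 4 * i + 4], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("bloom")
bf.add("cuckoo")
print("bloom" in bf, "absent" in bf)  # True, (almost certainly) False
```

A couple dozen lines, no clever machinery at the point of use:  you give up exactness (the third cost) and in exchange you save space and, just as importantly, programmer time (the fourth cost).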

But I was inspired to go ahead and post something about this because of the following from Paul Krugman's blog today.  (His title is "The Power of Two" -- if that's not a sign, what is?)  I figured he says it (everything?) better than I could, though you'll have to substitute CS keywords for economics keywords in the below.  
If this strikes you as too easy, and you think that real economics should involve harder math, well, I feel sorry for you — you just don’t get what it’s all about. (You’re what Rudi Dornbusch used to call a “fearful plumber”). And by the way, coming up with a really simple formulation of what seems at first like a very hard problem can take a lot of work. It sure did in the case of the MF lecture, where I actually did start with a really ugly saddle-path thingie until I realized that formulating the sudden stop the right way would make all of that go away.

Simple doesn’t mean stupid. Thinking that it does, does.
     


Tuesday, October 29, 2013

Visiting UC Davis

Before returning home last week I spent a day at UC Davis giving a talk.  I got to spend the day talking to various people I've collaborated with there (like Raissa D'Souza, Nina Amenta, and John Owens), and enjoyed getting to see and talk with Charles (Chip) Martel, who, when he's not busy as a computer science professor, is busy being a world-championship bridge player.  I spent several nights up late last month watching him play online (via bridge's version of an Internet broadcast) in the world championships in Bali. 

The big news was that Davis looks to be hiring this year, and one of the areas might be some variation of "big data", with theoretical types definitely included in that category.  Davis is nicely located about an hour's drive north of Berkeley (in reasonable traffic), and enjoys standard California amenities.  (The day there was filled with ridiculously nice weather and excellent food.)  If you're job searching, you should look for their job announcement.

Thursday, October 24, 2013

Mitzenmacher Drinking Game?

I've been visiting the Simons Institute for one of their workshops the last few days.  I got my advisor Alistair Sinclair to give me a tour.  I have to say, that's an amazing space they have.  A very nice building, and they've set it up wonderfully (lots of light and open working spaces).  I can't believe their location on the Berkeley campus. 

After my talk, someone told me (and now I forget who) that someone (Michael Goodrich?) said that the Mitzenmacher talk drinking game was that you took a drink every time a slide says "Bloom" or "cuckoo".  That would be a dangerous drinking game.  (It's funny because it's true.)

Sunday, October 20, 2013

NSF Coming Back Online

I was impressed by how quickly the NSF got Fastlane back online.  (I think it was turned on a few hours after the shutdown was ended.)  But like many people, I'm waiting to hear how the missed deadlines (or upcoming deadlines) will be managed.  I haven't seen word on that on the web site, but looking online I found this article.  It suggests that "BFA will establish and publish on the NSF website within one week agency-wide policies for proposal deadline extensions and other grant-related actions."  (I am not sure what the BFA is.)  That pretty much matches my expectations.  I can imagine it's very difficult catching up after being shut out of your office for two weeks, especially when having to deal with the concerns of hundreds/thousands of scientists who want to find out what's going on.  I'm not at all surprised that they have to take several days to figure out what the best way forward is.  Heck, it probably will take several days just to catch up with the weeks of e-mail and other paperwork. 

I think the article is encouraging patience on our part.  If you're awaiting word, just check the NSF web site this week.  If you see or hear anything important, try to help spread the word.  The NSF has, in my experience, been very reasonable about dealing with things, and I'm sure they're working to figure out a way forward that is effective and fair to the people waiting to turn in proposals or otherwise participate in NSF activities.

Thursday, October 17, 2013

Simons Research Fellowships Post

Alistair Sinclair asked me to post a note about the Simons Research Fellowship positions at Berkeley.  But since that's where Justin Thaler (my now-former student) ended up, I thought he could put in a few words about his experience to add a personal touch.  Justin says...
Having been a Research Fellow at the Simons Institute for a couple of months now, I cannot speak highly enough about the experience. Others have already given a sense of how exciting it is to be here, so I'll just briefly list some of the things I find most striking about the place.

* The atmosphere is surprisingly collaborative, even for a place specifically designed to foster collaboration.
* Any time I have a question I can't answer, there is an expert's door I can immediately knock on.
* There's a great mix of junior and senior people.
* It's particularly fun hanging out with other Research Fellows! And I look forward to every Friday, when we all meet for lunch and one of us gives a short, informal talk on, well, whatever that person wants to talk about.
* Seriously, what's not to like about having no non-research responsibilities, surrounded by dozens of top researchers in the same boat?
With that very positive description, here's the formal information:

================================================================

The Simons Institute for the Theory of Computing at UC Berkeley invites applications for Research Fellowships for academic year 2014-15.

Simons-Berkeley Research Fellowships are an opportunity for outstanding junior scientists (up to 6 years from PhD by Fall 2014) to spend one or two semesters at the Institute in connection with one or more of its programs.  The programs for 2014-15 are as follows:

* Algorithmic Spectral Graph Theory (Fall 2014)
* Algorithms and Complexity in Algebraic Geometry (Fall 2014)
* Information Theory (Spring 2015)

Applicants who already hold junior faculty or postdoctoral positions are welcome to apply. In particular, applicants who hold, or expect to hold, postdoctoral appointments at other institutions are encouraged to apply to spend one semester as a Simons-Berkeley Fellow subject to the approval of the postdoctoral institution.

Further details and application instructions can be found at http://simons.berkeley.edu/fellows2014.  Information about the Institute and the above programs can be found at http://simons.berkeley.edu.

Deadline for applications: 15 December, 2013.

Wednesday, October 02, 2013

Some Advice on Entrepreneurship from the AH meeting

At the Andreessen Horowitz academic round table (see past post), there was various advice, some of it contradictory, for professor-types interested in starting companies.  I should start by saying that all of this is my interpretation of the advice, and the various people involved are not responsible if I've gotten it wrong.  Certainly further opinions are welcome.  

At the research level, Nick McKeown expressed some of his rules for research projects.

1. Pick a problem that is intellectually interesting.
2. And improves the practice.
3. And industry doesn't like (yet).

His idea (my interpretation!) was that if industry liked the idea, then the problem wasn't out there enough in terms of a time horizon.  Also, given the resources in industry, if industry really liked an idea, they could throw more resources at it than researchers at a university.  This idea had some pushback.  For example, Michael Franklin said the AMPLab at Berkeley had a lot of enthusiasm from industry, and that the industry support and interest was very positive in terms of their success.  (AH very recently funded a startup that came out of the AMPLab.  And the AMPLab is very impressive in terms of its resources, including people -- lots of great faculty.)

I will say that part of Nick's conception resonated with me.  When I've expanded my research into new areas, I've found at times that people in that area can be very negative.  And when that happens, it often turns out to be the most interesting research.  The work on low-density parity-check codes from way back when was roundly criticized by multiple old-guard coding theorists when we first presented it, and then suddenly there were entire sessions at coding theory conferences on the subject.  If you're inspiring angry reactions, you may indeed be on to something in your research.  (Of course, as he also acknowledged, you may also just be wrong.)

Another key issue that arose was "commitment".  The VCs at AH expressed some skepticism about professors who wanted to take a year (or maybe two) off to help set up a company but then hand it off and go back to their regular job.  Besides investing in an idea, they're investing in a team, and it's not a great sign if the team leader isn't committed.  Also, there's the feeling that that sort of change in leadership can have a huge transition cost.  (Also, I think, as I mentioned previously, they really seem to like working with tech CEOs.  Handing a company off may remove the "tech" leadership.)  They were fine with a model where it was professors and graduate students starting a company, and the commitment was coming from the graduate students;  in that case, a "part-time" professor founder could be workable.

I personally think the "commitment" issue can be a challenge.  It's a problem for me, with liking my regular job so much.

There was various talk about patents.  Most of the crowd were against making them a priority in starting a business, and recommended not getting them.  A patent, it is said, is just a license to sue, and who needs or even wants a license to sue?  Making your work open source to build excitement and interest, and then commercializing after that point, can be a very successful business model, and it maintains the academic desire to get the basic work out to a wide audience.  But perhaps most importantly, as an academic, patenting your work means dealing with your school's version of the Technology Licensing Office, and nobody says anything good about their technology licensing office.  (Even the west coast schools -- the best anyone said is that generally theirs would stay out of your way.)  A patent gives your TLO license to ask for whatever percentage of your soon-to-be company they feel like asking for, and until they sign off, it can be hard to get a VC interested in a company where another entity holds the patent.  And generally speaking, your success is not their performance metric.

(Two quick additional points related to TLOs.  Many noted that most TLOs grew up around the medical school/biology groups in the university, where patents matter a lot and licensing is a strong way to go.  That's much less so in CS tech.  Also, while you'd think TLOs would be thrilled to have professors/graduate students start companies, get very wealthy, and donate back to the university -- that doesn't seem to be their model, as enlightened as it might seem to us.)  

I certainly know some cases where patents at least seem to me to have been important to "startup" companies.  (Akamai/Limelight?)    But the rest of what was said seemed to make a lot of sense.   

One thing the VCs emphasized is the importance of timing.  They said the idea behind many startups had actually been around for a while, the subject of study in universities or labs, but often the timing has to be right to make the move into the product space.  Sometimes other technology has to catch up, or something else in the environment has to go your way.  A lot of "failures" may not be failures of the idea, but just not the right timing.  

It was expressed that startups that have a mission to change the world in some interesting way were better off than startups whose mission was just to make money.  In particular, it can help recruit the best and the brightest, creating a very powerful positive feedback loop.  Simply stated missions -- Google says theirs is to "organize the world's information and make it universally accessible and useful" -- can be quite powerful.  The cynical might question people's motivations and whether money is really what's behind the mission, but either way, missions can be useful.

Finally, and perhaps this isn't so surprising, but the best way to connect with VCs is probably through personal connections.  If you know someone that's worked with the VCs, get them to introduce you.  The VCs apparently get thousands of proposals sent to them a year, and very few of those get consideration.  Having someone they trust vouch for you means a lot in terms of them being willing to make time to listen to you.  That's not unlike aspects of academia (the importance of letters for jobs/graduate school;  in some cases connections being helpful for getting funding from industry).  While not surprising, it seems worth saying. 

Tuesday, October 01, 2013

Entrepreneurship and the Curriculum

At the Andreessen Horowitz academic round table (see last post), the issue of how to promote student entrepreneurship through the curriculum arose.  The VCs at AH (which I'll use for short hereon) want there to be more tech-based CEOs.  As they put it, it's easier to teach a tech person what they need to learn about business than to teach a business person what they need to learn about the tech.  Somehow, in most universities and I believe in the world at large, the culture has developed that the business people think they're the powerful ones, not the tech people who build the things that consumers love.  The business people think they're the ones delivering the value, and then divide the value accordingly.

Don't believe me?  Go see http://whartoniteseekscodemonkey.tumblr.com/ , a site (that came up in the discussions) devoted to the e-mails sent by Wharton business school people looking to hire (undergraduate) programming and engineering talent.  As a faculty member I get bunches of these sorts of e-mails a month, and I'm sure the computer science students do as well.  The underlying message is that the tech people are commodity cogs to be plugged in as needed.  That's not the message we want our students to get, and not how things really work in most successful startups.  

So AH says they believe in and support the tech CEO, and want to encourage that.  What does that, and entrepreneurship generally, mean for our curriculum?  Should CS departments have courses on entrepreneurship (or give credit for classes in other departments on the subject)?  Should we teach computer languages that are the latest on the start-up scene (in preference to those that, arguably, provide a larger conceptual framework or encourage certain ways of thinking)?  Should we have an "entrepreneur track", like we might have a theory track or AI track or computer science and engineering track?  What is the school's role, and the department's role, in encouraging entrepreneurship?  Some people thought CS departments perceive themselves as being in the business of producing more professors, and thereby ignore the potential CS has to change the world via business.

These are tough questions.  One issue that makes it even more problematic for CS is that these problems are not faced by many other parts of the university -- literature, history, and even most of the social sciences don't have a significant start-up scene -- which means in some ways, we're on our own.  Indeed, significant parts of the university might actively resent an emphasis on entrepreneurship, which they might argue does not always fit so well with the university's educational mission.  (Or,  perhaps, it just represents self-interest on their part.)

Aged fuddy-duddy that I am, I'm somewhat sympathetic to this view.  Computer science is science.  I want to educate students about the great questions (and answers) of computer science, and I am thrilled to be educating the next generation of scientists, especially computer scientists.  But yes, computer science is also engineering (in the practical sense of the word), providing the ability to solve immediate problems, yielding economic benefits to the users and of course the developers of the solutions.  I see striking the right balance as a challenge;  greedily, I do somehow want both.

At Harvard, I feel we've been pushed, and have pushed ourselves, to make the requirements for the major quite minimal (in terms of the number of classes), so I want those required classes to be on the "science" side.  I want computer science graduates to have both breadth and depth in computer science.  Much of the entrepreneurship can naturally fall outside the curriculum -- there are now a number of student organizations, and university-level initiatives, to promote entrepreneurship.  (Harvard, I think, has been finding a way to expand the concept of entrepreneurship beyond just "business" -- into the larger concept of innovation -- to make it more appealing throughout the university.  For example, check out the i-lab.)  At the same time, it's clear to me that having all the entrepreneurship activities fall outside the traditional curriculum potentially pushes a set of students away.  Again, we're left with finding the right balance, for us.

Sadly, the meeting's discussion on this only lasted a short while, and I felt left with more questions than answers.  Feel free to discuss your thoughts here.  

Monday, September 30, 2013

Notes from the Andreessen Horowitz Academic Round Table

I've spent the last couple of days attending the Andreessen Horowitz Academic Round Table.  Andreessen Horowitz is a leading venture capital firm, founded by Marc Andreessen and Ben Horowitz.  The goal of the meeting, as it was described to me, was to get a number of academics and entrepreneurs together, along with the VCs, and get to know each other in a non-pressured environment.  Andreessen Horowitz seems very interested in promoting the technical CEO (as opposed to the "MBA CEO") as a viable option for startups, and seems interested in establishing long-term connections with universities.  They're obviously very well connected at Stanford and Berkeley, but recognize that there is startup potential at other universities that is probably underutilized.

Most of the program was a series of short technical talks, mostly by academic types, including many academics with start-up experience.  Some talks were also by people leading start-ups that Andreessen Horowitz has invested in.  The talks were of a uniformly very high quality.  It was like a super-interesting conference, perhaps made more interesting because of the breadth of computer science areas covered.  The speaker list included Susan Athey, Ed Felten, Michael Franklin, Michael Kearns, Nick McKeown, Rob Miller, Andrew Ng, Tuomas Sandholm, and Stefan Savage, just to name some.  On the company side, the speakers included Ben Milne of Dwolla, who explained a lot of the cruft that has been built up in the money system in terms of moving money around, and how he was getting rid of it. Vijay Balasubramaniyan of Pindrop Security described their fraud detection measures for phone-based fraud.  Jonathan Downey of Airware discussed developing open architectures for the commercial drone market.  (And more.)

Before I go on, the most entertaining talk bit:  Rob Miller showed a demo of a crowdsourcing program for shortening your document -- to get you down to that 10-page limit when you're a 1/4 page over, and you're fed up hours before the conference deadline.  (Looking online, I think this must be the Soylent project.)  I hadn't seen it before.  For me, as a CS professor, that's a great example of crowdsourcing.

Beyond the technical talks, there were some interesting discussions about the connections between research and entrepreneurship, about how we might teach our students so that they can be entrepreneurial both in technology and in business, about what VCs like Andreessen Horowitz are looking for in (faculty-based) startups, and about how many universities seem to get in the way rather than help their faculty (or graduate students, or even undergraduate students) start companies.  (It seemed like the most controversial topic involved who had the worst, most unhelpful technology licensing office...)  In many ways, these discussions felt more interesting than the talks, and I wish we had more time for them -- even though the talks were really good, this was stuff that was new to me, and where the people in the room were experts I wanted to hear from on these topics.

There was much too much fodder at this meeting for a single blog post, so I'll try to cover some of these themes a bit more in depth over the next few days.  One thing that might come up in the future -- that I think would be great and would recommend (to them and to whoever might be able to attend) -- is that they might do another meeting like this but focus on inviting graduate students.  After all, professors already have exciting jobs they enjoy and are attached to -- graduate students may be a bit more eager to take their ideas and take the leap to try to build a company out of them.






Saturday, September 28, 2013

Yelp Reviews News

Giorgos Zervas is in the news again (with Michael Luca of Harvard Business School), this time for his work on filtered/fake Yelp reviews.  See also this piece in the Wall Street Journal blog.  High-level issue:  filtered reviews are on the rise, as businesses recognize that social media reviews can be important to their business.  The work is quite timely given the recent NY Attorney General's investigation into fake reviews that led to a number of fines.

Friday, September 27, 2013

Dina Katabi, MacArthur Genius

It's always nice to see a computer scientist on the list for an award that spans multiple disciplinary areas.   This year, we get to congratulate Dina Katabi for earning a MacArthur Fellowship.  Dina's work focuses on wireless, but seems to span more areas than you could imagine.  She was part of the recent team that came out with the breakthrough work on sparse FFT.  She's done some of the most well-known work in network coding (XORs in the Air).  And recently, she's been working on using wireless signals to see through walls.

The official citation is here.  (Apparently, Dina is the "894th" Fellow.) 

Thursday, September 19, 2013

Reviewing Bad

A pun title, relating to two quick things.

First, I had the wonderful experience of getting to see (through a special deal set up by Harvard's faculty development office) All the Way at the American Repertory Theater.  It's a new play following the history of Lyndon Johnson (and Martin Luther King) from November 1963 to November 1964 (from when Kennedy was assassinated to when Johnson won the presidential election).  I was silly enough to not realize when I got the tickets that Bryan Cranston, of Malcolm in the Middle and Breaking Bad fame, was playing Johnson as the lead.  It was a really fantastic show.  (Hey, the rest of you in Harvard CS who aren't going to these things -- why not?  They're awesome!  Krzysztof was the only one I saw this time...)  Well acted and fascinating history.  The cast was amazing (and large -- I think 17 actors total), and I kept recognizing them from TV.  My inner gushing fan was set off by Michael McKean -- at one point, some of the actors were out in the audience area, and I excitedly noted that McKean was about six feet from me.  (I chose not to seek his autograph given that the performance was going on at the time.) 

[Note -- sadly, the show is already sold out... at least for this run.]

Ah, then the bad news.  After being on the executive PC for STOC 2013, I heard afterwards from multiple colleagues whose papers had been rejected about what they felt was the low quality of the reviewing.  (In my defense, I commiserated with several of them at the time.)   So, after getting the reviews from the SODA PC (for my rejected papers), I feel obliged to comment.  Quality-wise, they're terrible.  (Not universally so... but some of them....)  I was going to put in specific examples, but after the heat of the moment died down, my cooler head prevailed and determined that was inappropriate.   But suffice it to say that beyond the usual we-don't-understand-the-motivation type stuff, there are comments that are factually wrong and betray fundamental misunderstandings, and opinions regarding "what's important" in the paper that are -- in my experience -- just off the mark.  I've been through it before -- you suck it up, find the useful comments, rewrite, and re-submit.  But it is disturbing (from both sides, as the receiver of reviews and as one helping manage the reviewing process), and worrisome if it's an increasing problem for many submitters.    

Friday, September 06, 2013

Guest Post by Justin Thaler: A "Mini-Survey" of Practical Verification Algorithms

Justin Thaler, after presenting his most recent work on verifiable computation at Crypto, decided to write up a mini-survey on recent work in the area, and he offered to make it a blog post.  It's a somewhat longer-than-average blog post,  but well worthwhile.  So without further ado, here's Justin.

------

In the last few years there has been an enormous amount of work devoted to the development of "practical" protocols for verifiable computation, with multiple approaches being pursued across several research groups. It has recently become clear to me that the sheer volume of work and the sometimes subtle relationships among the various approaches has made the area difficult to follow. The goal of this post is to provide a reasonably concise description of the various approaches and the pros and cons of each, with the hope that this will serve as a handy reference for those interested in the area, at least until a more thorough and formal survey can take its place.

==============

Some Quick Background:

The field of verifiable computation seeks to develop protocols that allow a computationally weak verifier (such as a cloud computing user) to send its data to a powerful but untrusted prover (such as a cloud computing service). The verifier can then ask the prover to perform some computation on the data, and the protocol should enable the prover to return the output of the computation along with a guarantee that the output is correct. Concretely, the ultimate goal is to start with a program written in a high-level language like C and automatically ensure that the prover faithfully executed the program.

The only background required to understand this post is having encountered interactive proof systems and argument systems. To review, in an interactive proof, the prover tells the verifier the output of a computation, and then they have a conversation during which the prover tries to convince the verifier that the output is correct. Any interactive proof must satisfy two properties: the first, called completeness, says that an honest prover will convince the verifier to accept, and the second, called soundness, says that a dishonest prover will be caught with high probability even if the dishonest prover is computationally unbounded. An argument system is an interactive proof system that is sound only against polynomial-time provers.

===============

Sources of Overhead:

I will focus on projects that are reasonably general-purpose, require only one prover, and have attempted to refine theory toward implementation. In the general case, all approaches satisfying these criteria work by first turning the high-level computer program into an arithmetic circuit or a set of "constraints", and then using complexity-theoretic or cryptographic machinery (or both) to check that the prover correctly evaluated the circuit or satisfied the set of constraints. I will refer to the "high-level program ==> circuit/constraints" transformation in any system as the 'front-end', and the actual protocol used to check that the prover correctly evaluated the circuit as the 'back-end'.

Thus, there are two primary sources of overhead in existing systems: the overhead in the front-end (as some computer programs that run in T machine steps may only be computed by circuits with far more than T gates), and the overhead in the back-end (i.e., the extra work required for the prover to evaluate the circuit with a guarantee of correctness, relative to evaluating the circuit with no guarantee). The existence of two sources of overhead complicates the comparison among many of the papers that I will discuss, as work on front-ends and back-ends has been interleaved, some back-ends work with more general circuits than other back-ends (and hence can interface with more efficient front-ends), and there has been cross-pollination among systems. I will attempt to clearly delineate these issues below by focusing on contributions to the design of back-ends and front-ends separately, beginning with the former.

=================

Summary of Back-Ends:

Existing back-ends fall into three broad categories: interactive proofs, argument systems with pre-processing, and argument systems without preprocessing. I will describe each of the three approaches below, starting with the most efficient but least general approach and moving toward more general but less efficient approaches.

1) Interactive Proofs: In 2008, Goldwasser, Kalai, and Rothblum (GKR) gave a powerful interactive proof protocol for circuit evaluation. The costs to the verifier in this protocol grow linearly with the circuit depth, so the protocol only achieves an efficient verifier if the circuit has small depth (equivalently, if the function computed by the circuit has an efficient parallel algorithm). Fortunately, many of the computations actually being outsourced do exhibit large amounts of parallelism. In 2012, Cormode, Mitzenmacher, and I implemented the GKR protocol plus refinements -- most notably, we brought the runtime of the prover down from Omega(S^3) in a naive implementation to O(S log S), where S is the number of gates in the circuit. Our implementation demonstrated that the concrete costs to the verifier are very low, but there were two downsides to the implementation: First, despite our refinements, the prover runtime remained a bottleneck (the prover took roughly three orders of magnitude more time than that required to evaluate the circuit with no guarantee of correctness). Second, unless the circuit satisfied a certain 'regularity' condition on its wiring pattern, the verifier required an expensive pre-processing phase to "extract" information about the wiring of the circuit.

Very recently, I addressed these bottlenecks by showing the following. First, for 'regular' circuits, the runtime of the prover can be reduced from O(S log S) to O(S). Concretely, for these circuits, the prover now runs about 10 times slower than what is required to evaluate the circuit gate-by-gate with no guarantee of correctness. Informally, a circuit has a 'regular' wiring pattern if there is a way to label each gate g with a binary string such that the labels of the in-neighbors of g are very simple functions of the label of g (this condition is satisfied by natural circuits computing naive matrix multiplication, pattern matching, FFTs, and several other problems).

Second, for any data parallel computation (i.e., any setting where an arbitrary sub-computation C is applied independently to many pieces of data, before possibly aggregating the results), the cost of the pre-processing stage for the verifier and the overhead for the prover can be made to depend only on the size of the sub-computation C, and not at all on the number of pieces of data to which C is applied. Essentially, this holds because the data-parallel computation is 'regular' at a higher level of abstraction than in the first result -- while the sub-computation C may have very irregular wiring, the various invocations of C do not interact at all.

Third, I gave a simple protocol for matrix multiplication that avoids the circuit representation entirely, allowing the prover to compute the correct answer using an arbitrary algorithm, and then spend O(n^2) extra work proving the answer is correct.

[Aside on Matrix Multiplication: It is worth comparing this new matrix multiplication protocol to classical approaches like Freivalds' algorithm. The latter requires no interaction and *no* extra work for the prover. The advantage of the interactive approach is most evident for computations that invoke matrix multiplication as a means to an end rather than as an end in itself. For example, the best-known algorithms for computing the diameter of a graph work by repeatedly squaring the adjacency matrix. Running Freivalds' algorithm to verify these algorithms would require the prover to send the intermediate matrices to the verifier, which would be terabytes of data even for graphs on 1 million nodes. The interactive approach allows the prover to send only the right answer (which is just a single number), and then spend a few KBs of extra communication and a low-order amount of extra work proving the answer is correct.]
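To make the comparison concrete, here is a minimal sketch of Freivalds' check in Python (my illustration, with plain lists-of-lists standing in for whatever matrix representation one would actually use):

```python
import random

def freivalds_check(A, B, C, trials=20):
    """Probabilistically check whether A * B == C (square matrices).

    Each trial multiplies by a random 0/1 vector, costing O(n^2) work
    instead of the roughly O(n^3) needed to recompute the product.  A
    wrong C is caught in any single trial with probability at least 1/2,
    so the error probability after `trials` rounds is at most 2**-trials.
    """
    n = len(A)

    def mat_vec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        if mat_vec(A, mat_vec(B, r)) != mat_vec(C, r):
            return False  # definitely A * B != C
    return True  # A * B == C with high probability
```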

2) Argument Systems with Pre-Processing: In 2007, Ishai, Kushilevitz, and Ostrovsky gave an argument that required an expensive pre-processing phase for the verifier, but avoided the use of short PCPs (short PCPs are complicated complexity-theoretic protocols upon which most argument systems were traditionally based -- see the section on Argument Systems without Pre-processing below). Systems with pre-processing are primarily of interest in data-parallel settings, because the verifier can only save work when the set-up costs are amortized over many instances of the same computation (i.e., when the same computation is applied independently to many different inputs).

In 2012, Setty, McPherson, Blumberg, and Walfish began refining IKO's protocol. They called their first system Pepper, and subsequent refinements were described in a follow-up system Ginger. Further follow-up work incorporated an ingenious encoding of computations due to Gennaro, Gentry, Parno, and Raykova (GGPR), and resulted in a system called Zaatar that (mostly) supersedes Pepper and Ginger.

Parno, Gentry, Howell, and Raykova introduced another system called Pinocchio that is also based on the theoretical innovations of GGPR. Also, very recent work by Ben-Sasson, Chiesa, Genkin, Tromer, and Virza (BCGTV) implements a back-end that is closely related to Pinocchio.

The high-level comparison between the Zaatar back-end and the Pinocchio/BCGTV back-end is the following. Zaatar is somewhat more efficient because it uses less cryptographic machinery (roughly, the verifier in Zaatar poses queries in the clear, while Pinocchio/BCGTV require the prover to compute over encrypted queries.) However, Pinocchio/BCGTV provide functionality that Zaatar does not. Specifically, the former supports public verifiability and zero-knowledge. Additionally, it consists of only two messages, one from the verifier to the prover and one from the prover to the verifier. Finally, it allows set-up costs to be amortized over an indefinite number of instances of the same computation -- in contrast, the pre-processing costs of Zaatar can be amortized over many instances of a computation only if all instances are verified at the same time. It is worth mentioning that the Pinocchio/BCGTV back-ends are based on non-standard cryptographic assumptions, while Zaatar is based on standard ones.

3) Argument Systems without Pre-Processing: Ben-Sasson, Chiesa, Genkin, and Tromer (BCGT) have been pursuing argument systems that avoid a pre-processing phase for the verifier. These argument systems are based on short PCPs, and while existing work on this topic is still theoretical in nature (focusing on reducing the concrete costs of existing PCPs), implementation efforts are reportedly underway.

===============

Comparison of Back-Ends:

Comparison of Argument Systems without Pre-Processing to other approaches: The advantage of the short-PCPs approach relative to interactive proofs and argument systems with pre-processing is that the former *never* requires a pre-processing phase for the verifier. The disadvantage is that the overheads to the prover are likely to be substantially higher than they are in argument systems with pre-processing. For example, the short PCPs described by BCGT require sequential repetition of the protocol many times in order to drive down soundness error. Like Pinocchio and Zaatar, they also require performing FFT operations on objects as large as a computation transcript, but unlike Pinocchio and Zaatar, these short PCPs also require working over fields of small characteristic (and hence can only be combined with front-ends that generate circuits over these fields), require the use of non-trivial finite field algorithms, and require circuits that satisfy certain algebraic notions of regularity. Precisely how large these overheads are in practice remains to be seen.

Comparison of Interactive Proofs to Argument Systems with Pre-Processing: The advantages of interactive proofs are three-fold. First, they are secure even against computationally unbounded provers, while argument systems are only secure against polynomial-time provers. Second, interactive proofs can avoid pre-processing entirely for structured computation (e.g. 'regular' circuits) and minimize pre-processing for data-parallel computation, while argument systems with pre-processing (discussed below) inherently require an expensive pre-processing stage for the verifier. Third, interactive proofs achieve unmatched prover efficiency when they are applicable, and in fact can avoid the circuit representation entirely for fundamental primitives like matrix multiplication. Concretely, Zaatar's prover is roughly 3 orders of magnitude slower than evaluating the circuit with no correctness guarantee (see e.g. these slides by Mike Walfish), which is roughly commensurate with the prover overhead in my original GKR implementation with Cormode and Mitzenmacher, but is a couple orders of magnitude slower than what we can now achieve with interactive proofs for "regular" circuits (see above).

The disadvantages of interactive proofs are the following. First, they are only applicable to small-depth circuits (i.e., parallelizable computation). Second, they do not support "non-deterministic" circuits, which can be extremely useful for turning high-level programs into small circuits as described in the section on front-ends below (there is evidence that lack of support for non-determinism is inherent in the use of interactive proofs). This means that certain kinds of computations such as those involving many random accesses to memory or sorting/comparison operations are problematic for interactive proofs. Third, interactive proofs cannot support properties like zero-knowledge and public verifiability that are achieved by Pinocchio/BCGTV. Fourth, interactive proofs require logarithmically many rounds of interaction between prover and verifier, while argument systems typically require just one or two rounds.

(I maintain, however, that the round complexity of interactive proofs isn't a big deal. First, interaction can be generically removed via the Fiat-Shamir heuristic, which is secure in the random oracle model and may be acceptable in practice. Second, any system allowing the verifier to request a specific computation in the first place is likely to involve a remote procedure call and hence incur round-trip delays anyway. Third, browsers already send a separate request for every image and script on a web page, and typically a browser cannot substantially parallelize these requests.)

====================

Summary of and Comparison of Front-Ends:

Ginger, Zaatar, and Pinocchio all contain compilers that turn high-level programs into (non-deterministic) circuits (Pinocchio was the first to handle a subset of C, while the others initially handled a different high-level language but have since moved to C). Vu, Setty, Blumberg, and Walfish (VSBW) built a front-end for an implementation of the GKR protocol (this front-end must generate small-depth circuits that are either deterministic or contain a small number of non-deterministic inputs, since the non-deterministic inputs must be sent explicitly to the verifier. It should also be clarified that their compiler does not produce regular circuits, and hence the GKR protocol requires a pre-processing phase when applied to these circuits, and amortizes these costs over many instances of the computation exactly as in Zaatar/Ginger/Pepper). Given a high-level program, the VSBW front-end also automatically determines whether Zaatar or GKR would be most efficient to verify the computation and runs the better of the two protocols, and their system as a whole is called Allspice.

However, none of the front-ends above efficiently supports programs that utilize random access to memory. Two recent front-ends change this. One is the front-end developed by BCGTV mentioned above, which implemented earlier theoretical work by the first four authors. Given a high-level program, their approach to circuit generation roughly works as follows. They first compile the C program into actual machine code for a simple Random Access Machine (RAM). They then generate a circuit which takes an entire transcript (sorted by time) of the RAM computation as a non-deterministic input, and checks that the transcript is valid. This requires checking the transcript for both *time consistency* (i.e., that the claimed state of the machine at time i correctly follows from the machine's claimed state at time i-1) and *memory consistency* (i.e., that every time a value is read from memory location x, the value that is returned is equal to the last value written to that location).

Their circuit checks time-consistency by representing the transition function of the RAM as a small circuit (they have managed to represent the entire transition function as a circuit with ~1000 gates, at least for short RAM computations of < 1 million steps. While this is still a lot of gates, it is a substantial engineering achievement). It then applies this circuit to each entry i of the transcript and checks that the output is equal to entry i+1 of the transcript. Their circuit checks memory consistency by using routing techniques to re-sort the transcript based on memory location (with ties broken by time), at which point it is straightforward to check that every memory read from a given location returns the last value written to that location.
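For intuition, here is a toy Python version of the memory-consistency check (my own illustrative sketch of the underlying idea; BCGTV's circuit has to implement the re-sorting with a routing network rather than a call to sort):

```python
def memory_consistent(transcript):
    """Check memory consistency of a RAM execution transcript.

    `transcript` is a list of (time, op, address, value) tuples, with op
    either "read" or "write".  After sorting by (address, time), every
    read must return the value of the closest preceding write to the
    same address (reads before any write are assumed to see 0).
    """
    by_address = sorted(transcript, key=lambda step: (step[2], step[0]))
    last_address, last_value = None, 0
    for time, op, address, value in by_address:
        if address != last_address:
            last_address, last_value = address, 0  # fresh, zero-initialized location
        if op == "write":
            last_value = value
        elif op == "read" and value != last_value:
            return False  # a read returned something other than the last write
    return True

ok = [(0, "write", 7, 42), (1, "read", 7, 42)]
bad = [(0, "write", 7, 42), (1, "read", 7, 99)]
print(memory_consistent(ok), memory_consistent(bad))  # True False
```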

The second front-end that supports random access to memory is called Pantry, developed by Braun, Feldman, Ren, Setty, Blumberg, and Walfish. They augment Zaatar and Pinocchio's front-end to support random access to memory by using Merkle-hashing techniques -- this offers an alternative approach to the routing-based technique for ensuring memory consistency pursued by BCGTV. (Interestingly, the Merkle-hashing approach was explicitly considered in BCGT's theoretical work, but has not yet been incorporated into the BCGTV system).

Merkle trees allow one to outsource memory maintenance by building a binary tree whose leaves correspond to memory locations. Each leaf stores the value contained in the corresponding memory location, and each internal node stores an evaluation of a collision-resistant hash function applied to its children. As long as the verifier knows the value of the root, every time a memory location (leaf) is accessed, the prover can 'prove' that the returned value is correct by revealing all values along the leaf-to-root 'witness path'. One can show that the only way the prover can lie is by finding a collision in the hash function.
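Here is a small Python sketch of the witness-path check (again just for illustration; SHA-256 stands in for the circuit-friendly collision-resistant hash an actual system would choose, and the tree is built explicitly only to produce an example):

```python
import hashlib

def H(*parts):
    # Concatenate byte strings and hash; a stand-in for the
    # collision-resistant hash a real system would use.
    return hashlib.sha256(b"".join(parts)).digest()

def verify_witness_path(root, leaf_value, leaf_index, siblings):
    """Check that `leaf_value` sits at `leaf_index` in the Merkle tree with
    the given `root`.  `siblings` lists sibling hashes from the leaf up to
    (but not including) the root -- the 'witness path' the prover reveals."""
    node, index = H(leaf_value), leaf_index
    for sibling in siblings:
        node = H(node, sibling) if index % 2 == 0 else H(sibling, node)
        index //= 2
    return node == root

# Tiny 4-leaf example, built here only so the check has something to verify.
level0 = [H(x) for x in (b"a", b"b", b"c", b"d")]
level1 = [H(level0[0], level0[1]), H(level0[2], level0[3])]
root = H(level1[0], level1[1])
# Prove leaf 2 (value b"c"): siblings are H(b"d") and the left subtree's hash.
print(verify_witness_path(root, b"c", 2, [level0[3], level1[0]]))  # True
```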

Pantry identifies a collision-resistant hash function whose evaluation function can be (relatively) efficiently represented as a circuit or set of constraints. This allows evaluation of the hash function (and hence checking of 'witness paths', and hence support for random access to memory) to be folded into Zaatar's and Pinocchio's verification machinery.

Pantry's use of collision-resistant hash functions is also useful for some applications (such as MapReduce) that do not necessarily require random access to memory -- for these applications, Pantry avoids the use of a full Merkle-hash tree, reducing costs relative to what is required to support general RAMs.

Comparison of Pantry and BCGTV:

Generality: BCGTV can handle arbitrary C programs. Pantry comes close to achieving this but still cannot handle data-dependent loops. BCGTV also has the advantage that it can be modified to generate circuits that satisfy the algebraic regularity conditions required by back-ends based on short PCPs (though implementations of these back-ends remain in progress). One important aspect of Pantry is that it enables the outsourcing of computations that are run over "remote inputs", i.e., inputs that the verifier never sees in full, but is merely provided a digest (typically a hash, or a Merkle-hash) of the input. This is important in several applications, such as verifying MapReduce computations. In principle, this feature could be incorporated into the BCGTV front-end as well, but doing so efficiently would likely require redoing many of the optimizations in Pantry.

Efficiency: BCGTV's primary overhead stems from the fact that the ~1000-gate transition function sub-circuit must be repeated for *every* step of the RAM. Thus, their circuits are at least 1000 times larger than the runtime of the RAM. Pantry on the other hand requires expensive evaluations of a collision-resistant hash function (when supporting general random access to memory, logarithmically many evaluations of the hash function are required per memory access). However, hash function evaluations in Pantry are only required when memory is accessed, not for every step of the RAM. Thus, which of the two approaches (BCGTV vs. Pantry) generates smaller circuits may depend on how memory-intensive the computation is.

================

Summary:

Ideally, this post makes clear that each of the approaches to verifiable computation being pursued thus far achieves a different tradeoff between efficiency, expressiveness, and support for features such as public verifiability and zero knowledge properties. This diversity can only be a good thing as users will be able to choose the approach that best suits their needs.

In more detail, despite substantial progress, the fully general-purpose front-end still generates circuits that are at least three orders of magnitude larger than the runtime of the original RAM computation. Moreover, existing back-ends based on argument systems with pre-processing impose an additional three orders of magnitude overhead for the prover. Arguments based on short PCPs will avoid pre-processing for the verifier, but impose additional overheads on the prover. Interactive proofs can avoid these large overheads for sufficiently structured computation, but require a front-end that generates small-depth deterministic circuits, and do not provide cryptographic properties like public verifiability and support for zero-knowledge achieved by some of the argument systems.

Two attractive directions for future work present themselves. One is to develop protocols and build systems that can verifiably execute general computations, but that automatically leverage structure within computations for efficiency gains. Alternatively, it may be better to develop a restricted programming framework analogous to MapReduce that still allows for the expression of a powerful class of computations and automatically "extracts" the structure necessary to verify the computation efficiently. If this approach is pursued, it will be critical to determine the right balance between the level of generality to support and the amount of structure to force upon computations for efficiency gains.

Acknowledgements: I am indebted to Mike Walfish and Andrew J. Blumberg for extensive conversations over several months that substantially influenced my understanding of the area and this post in particular, and to Mike Walfish and Michael Mitzenmacher for valuable feedback on an early draft of this post. Any errors are entirely my own.

Further resources: Bryan Parno hosted a session on verifiable computation at the Microsoft Research Faculty Summit, where he, Michael Mitzenmacher, and Mike Walfish spoke about several of the approaches discussed in this post. Video of the session is available, and Mike's slides contain a nice comparison of the various approaches.

Sunday, September 01, 2013

How Should History Affect Funding Decisions?

As I go through the reviews from my latest NSF proposal, I see some general high-level comments related to my past work.  This leads to an interesting and perhaps too-rarely-discussed issue -- how should the past influence present funding decisions?  Mikkel Thorup recently tried to open up discussion on this theme with a viewpoint piece in the Communications of the ACM.   (Sorry it's behind a firewall;  if someone has a direct link please post in comments.)

To be clear, I'm not sure what the current NSF policy is on "history", as I haven't served on an NSF panel for a while.  In the past I recall hearing that NSF panels were not supposed to take PI history into account in evaluating the proposals, although it seemed to be implied that the program manager would take that into consideration if needed.  Another country I do reviews for does take history into account in a fairly direct way -- there are explicit questions regarding whether the applicant has demonstrated the qualifications necessary to carry out the proposed research project.  That's a fairly broad phrasing that, at least in my mind, opens the way to discussing the PI's past work.  So in that case there is some weight put on past performance.

I admit that I think past performance is a perfectly reasonable criterion for judging funding proposals.  It helps to think of extreme cases.  For example, if Les Valiant wrote a proposal and I somehow was on the panel and didn't understand or appreciate it, history (and/or basic probability) would overwhelmingly suggest that the fault is with me.  Or even if one would assume or argue that the proposal wasn't well written, wouldn't the risk/reward calculation based on the track record argue strongly for funding?  At the other extreme, one must make sure that new faculty with limited track records obtain enough funding to have the chance to ignite, either through special programs (like CAREER grants) or with some understanding that new faculty should be given an appropriate share of the funding pie.

In the end, my mindset is that there is a difference between writing a compelling sounding proposal and being able to deliver the research results.  Looking at a researcher's past helps calibrate what the final outcome will be.  Not taking it into account seems inherently wrong, under standard success metrics.  However, it's hard to assign a number as to how much it should affect the overall (and arguably quite subjective) scoring of proposals in any given panel that decides grants.  I don't envy people at the NSF the job of trying to untangle this messy situation, particularly in settings where they feel others (say, up in the direction of Congress) are out to second-guess their decisions.

In terms of my current reviews, I thought the panel took into account my past history in mild and reasonable ways that I appreciated.  Unsurprisingly my proposal dealt with problems at the intersection of algorithms, networking, and information theory, and I discussed that I had a (successful) history of interacting with all of these communities.  The reviewers acknowledged this history and noted that my background, including previous research in hashing, coding theory, and so on would likely be relevant to solving the problems I proposed.  I don't know how much it played into the final decision, but I was glad they agreed with the argument that I had the right background.  I should note, though, that in the past I've written proposals where I was trying to "branch out" into areas that were newer to me (they did not -- gasp -- involve hashing, or coding theory....), and I believe I experienced the same effect in reverse.  So there's certainly an argument the other way...

The other way the past seemed to be taken into account was with regard to some of the "broader impacts".  This blog, for instance, seems to be seen as a "positive" in terms of community outreach.  Similarly, the fact that I have produced a body of educational materials over the years (class notes*, surveys, a textbook, assignments, etc.) was noted.  Again, I think history should be taken into account somehow here.  It's easy to say you're going to write a textbook or large-scale survey in a proposal, but it's another thing to have a record of delivering. 

How much should past performance count in reviewing proposals?  And can we actually influence the funding agencies to take past performance into account in ways we as a community find suitable?  I'm curious to see if people have thoughts they'll share in the comments.



* I'm off from teaching Computer Science 124, Algorithms and Data Structures, now for sabbatical and potentially for a while.  If anyone teaching this wants my materials, though, let me know;  I've often passed them along to faculty starting to teach that type of course. 

   


Wednesday, August 28, 2013

Back

I haven't posted for a while, primarily because I started my sabbatical by going to Copenhagen for most of August.  I was mainly visiting the IT University of Copenhagen, thanks to the outstanding hospitality of Rasmus Pagh.  I also saw Mikkel Thorup, who has started at the University of Copenhagen (which is different from the IT University of Copenhagen), and stopped by the MADALGO summer school.  Copenhagen is a wonderful city -- lots to do and see, but not overwhelming, and with great features like excellent public transportation and (for US people like me) a population where everyone seems to know English as a second language.  ITU has a wonderful building (which is pretty empty of students over the summer) -- I was there a few years back for an ESA conference, and I hear it's where ICALP will be in 2014.  I hear other conferences -- SWAT and SEA -- will also be in Copenhagen next year.  So now's your chance to plan to go to Copenhagen, and I heartily recommend it. 

It felt very productive work-wise, which was good for getting the crust of 3 years of administration off in a friendly way.  Research and writing = fun.  The change of environment and the chance to work face to face with people was just what I needed.  I hope to talk about some of the output of the trip soon.

Other good news came in during the month, so it's time to thank some sponsors.  Eddie Kohler and I obtained a Google grant for systems-data structures work.  Much thanks to Google.  Eddie's just a couple doors down from my office (that I hope not to spend too much time in this year);  I'm looking forward to gaining more insight from him into what's important in practice. 

I also get to thank the NSF, who funded my grant proposal this year.  (Officially, that's now NSF CCF-1320231: AF: Small: Collaborative: Data Synchronization: Theory, Algorithms, and Practice.)  As always, while I may sometimes express disagreement with specific policies or issues raised by the NSF, I feel very fortunate that the NSF is around to fund academic research.  I depend on their funding to do my work, and I appreciate the work they do to make it possible.

The reviews I got on the proposal were very interesting, and I plan to discuss them a bit at a high level here in a future post, as it might be interesting or helpful to others.  In particular, the proposal was not universally loved -- there was criticism all around -- but the criticisms were, I felt, accurate, well thought out, and nicely presented.  While it's obviously a lot easier to say that since the proposal was eventually funded, it also made me feel that the process went well this time, and that I obtained useful feedback, both the positive type and the negative type.

Wednesday, August 14, 2013

Managing Your Advisor by Nick Feamster

I thought Nick Feamster's post on managing your advisor was a good read and good advice for new graduate students.  In fact, I'd recommend undergraduates read it too -- to get a better idea of how to manage their interactions with professors.

Sunday, July 28, 2013

In Sickness...

One of the more amusing conversations at the social part of the Microsoft faculty summit (initiated by the humorous commentary of Kevin Leyton-Brown) started with the idea that our support staff all have some number of sick days available to them, but faculty don't, because, I imagine, we're not supposed to get sick.  And certainly I (and I'm sure many faculty) can remember days where in less-than-perfect health we dragged ourselves in because it was our day to lecture.

Of course the truth is somewhat more complicated.  We don't have sick days, but generally professors have more flexible schedules.  We can generally schedule a doctor's appointment (or, sometimes more importantly, appointments for our children) during the week and leave work without having to check in or out.  I'm not sure I'd trade my flexibility for a number of sick days.  What we generally don't have, however, are plans for dealing with illness.  It's assumed that we'll be there when we're required.

After some number of years, this seems OK.  For my regular undergraduate class, I have complete lecture notes.  With a small amount of advance notice, I can have a grad student (or another faculty member) fill in for me, certainly for a lecture or if needed two.  My undergraduate classes are recorded;  in a real pinch, I could cancel class entirely, and get the video of the lecture from a previous class put online.  And sometimes, if the class is moving all right, it's OK just to cancel a class.  For my graduate class, there's more flexibility.  In the worst case, I could usually have a graduate student go in to lead a discussion on the reading or just talk about their own latest interesting work.  Other meetings or work just get pushed back or re-scheduled as needed, which I suppose is the same thing that happens with other professions.

Does anyone have other useful tips for faculty new or old for managing work while coping with temporary but non-trivial illnesses?

The conversation came back to me as I was sick most of this last week, and could not come in for several days.  This is summer, and I'm on sabbatical, so it was not as terrible as it would have been during the year.  But many of these issues arose.  Tuesday was a graduate student's defense that I was supposed to be at (and of course was hard to schedule);  I dragged myself in to work for it.  I believe that, if I hadn't been able to drive in, Harvard rules would have allowed me to "attend" by Skype or some other video service.  But that's obviously not the desired plan, and I admit I felt obliged to be there, having committed to going, despite feeling ill.  By Wednesday I was at the doctor's and getting antibiotics, and any meetings were either cancelled or moved to Skype/Google hangouts.  Electronic meetings while lying in bed are, for better or worse, now possible when sick.  An undergraduate doing summer research with me had set up a lunch with a group of students for me to talk to, and I had to cancel that, guiltily.

In the end the lack of sick days doesn't bother me.  Getting sick does, though.  I can't recommend it.  I feel fortunate I don't get sick that often -- maybe something about our line of work prevents us from getting sick frequently.




Friday, July 19, 2013

Recent Random Musings

1)  Last week a key came off my daughter's Apple Macbook keyboard.  One of the keys on my machine had also come loose -- the "E" would pop out several times a day (and just pop back in when pushed, but still, it was getting annoying).  I made an appointment at the local Apple store, came in, and they replaced the keys.  (I'm sure both machines are no longer under warranty.)   For my daughter's machine, she (in her increasingly independent pre-teen way) had actually tried to glue the key back on (which I did not know until the Apple employee pointed it out to me), which is precisely what they tell you not to do, and he spent a few minutes scraping and peeling the glue out to get the new key to fit.

No moral here, but just very nice above-and-beyond customer service from my local Apple store, so I wanted to commend them.  And remind people not to try to glue their keys back into the keyboard, although I expect most blog readers here don't need that reminder.

2)  In a fit of weakness, I said yes to something, and now I'm on the Science Board for the Santa Fe Institute.  I don't expect to advertise everything that comes up with them on this blog, but they do have a "short course" (link) on complex networks coming up in September in Austin.  I thought it worth mentioning because it seems to have a strong lineup of speakers.  Readers of this blog will most likely know of Aaron Clauset and Cris Moore, who are on the speaker list.  I also note Lauren Ancel Meyers -- I knew her from one of those summer programs when we were younger, and she's now the Director of the Division of Statistics and Scientific Computation and Professor of Integrative Biology at the University of Texas at Austin, known for (among other things) her work on modeling the spread of infectious diseases and implications for policies to prevent the spread of such diseases.  Anyhow, it seems like a good program.  (Note:  the course apparently costs money to attend.)

3)  The Microsoft faculty summit has a virtual version where you can watch the sessions or talks you want to see. 

Wednesday, July 17, 2013

MS Faculty Summit Day Two

Day Two of the summit started with a keynote shared between Peter Lee and Jeannette Wing, who now seem to be sharing the job of heading up Microsoft Research.  (Rick Rashid is stepping down from his Chief Research Officer role;  there was a nice tribute to him, with Ed Lazowska providing a very nice homage to his development of Microsoft Research over the past couple of decades.)  I'd say their talk was a bit "rah-rah" for Microsoft, but it was also quite "rah-rah" for basic research generally and its role in developing computer science, so I wouldn't want to complain.  (With their positive, enlightened view on research and the compelling way that they can describe and present it, perhaps those two should instead be heading up some of the large government programs in charge of sponsoring research.  Oh, wait...)  In particular, I'd be remiss if I didn't point out that Jeannette specifically called out (with several slides) the recent theory work on "interlacing families" by Adam Marcus, Dan Spielman, and Nikhil Srivastava (see links here, or discussion here from Nikhil Srivastava).  Nikhil works for MSR India, so this was an example of MSR-university basic research collaboration.  (Of course, Dan was Nikhil's advisor at Yale, so one might hope for a bit more exotic an example, but still, it's a nice example of basic research MSR supports.)    

The afternoon had a demo session -- a room full of demos from various MSR groups.  There was a good one on networking that I liked (can't find a pointer, if someone sends I'll update), but most seemed focused on visualization and human-computer interfaces.  A few with Kinect, and a pretty interesting one that was based on touch-screen-type technology but was focused on your feet.  (Sensors would be embedded in the floor.  It could tell who was who by what shoes they wore;  it could detect motions like tapping, kicking, even just weight shifts.)  With a screen embedded in the floor you could play "virtual soccer".  A further prelude to our eventual holodecks. 

The "spam" session was the most entertaining.  Saikat Guha of MSR presented their work on tracking ad-click bots and related ad fraud.  Part of the work was focused on determining how much fraud there was and where.  Using that, they can do things about it.  They found a particular type of malware that shadows a user, and when the user does a search but doesn't click on an ad, the malware wakes up and clicks on an ad, at most once per day.  The behavior then looks like a real user, so it's hard to catch;  on the other hand, clicking an ad once per day is itself a noticeable behavior...

But the most entertaining talk of all was, unsurprisingly, Stefan Savage, who was talking about the "economics of cybercrime" -- how they figure out the "money chain" of the companies sending the spam e-mail selling drugs and illegal software, how they think about the weak points in the "money chain" (it's the banks), and how they've worked to give this information to law enforcement so that law enforcement is better equipped to take down spammers engaged in illegal activity.  Needless to say, Microsoft is interested and involved -- they don't like pirated copies of their software being sold. 

Thanks to Microsoft for inviting me.

Red-eyes are a killer.  (They weren't 25 years ago.  I wonder what happened.) 

Tuesday, July 16, 2013

MS Faculty Summit Day One

Day one of the faculty summit was a great deal of fun.  Primarily, I found I enjoyed catching up with people -- I saw a number of past/current collaborators, as well as many people who I enjoy talking to but don't get to see often enough.  The summit is well attended -- several hundreds of people -- so there's plenty of people to see.

The morning keynote session included a large chunk of time with Bill Gates.  He talked briefly and took a large number of questions.  In his remarks, he spoke about the bright future he saw for software, particularly the potential in making large advances in big science problems.  We're able to do so much more now, we can be much more ambitious about what we can do.  Then he spoke about the areas the Bill Gates Foundation is focused on.  Education -- the Bill Gates Foundation funds the Khan Academy and several MOOC projects.  He sees MOOCs as a way of increasing personalization in education.  He talked about fighting disease, and in particular disease spread modeling.  He also talked about genetic modification of crops to improve disease resistance, drought resistance, and nutritional value, and finally digital microfinance tools.  There weren't any particularly controversial thoughts or questions;  the most memorable to me involved a question about the patent/copyright/IP system, which Bill Gates tried to strongly defend.

Our morning session on verification for cloud computing systems went well.  (All the sessions seemed to start a bit late, so we were a bit pressed for time.)  Bryan Parno and Michael Walfish gave excellent presentations, and I thought our talks fit nicely together in terms of giving a pretty complete picture of goings-on in the area.  The after-lunch session on the economics of computing was very good -- Dan Huttenlocher talked about the theory of badging (giving a model for how badges on things like StackExchange can motivate people to different behaviors, in both theory and practice), Muthu talked about a new advertising market he was interested in, and Eva Tardos talked about composable and efficient mechanisms (and the class of smooth mechanisms).  I thought both sessions were slightly underattended, though -- the bulk of the attendees seemed to be focused on machine learning and related work.  That must be where the "action" is these days.  

The evening offered a boat cruise -- and a heat wave throughout the US translates into beautiful sunny weather and quite comfortable temperatures for a boat cruise in Seattle.  The trip included great views of Mount Rainier, bridges rising to scoot out of our way, and apparently we boated by (one of?) Bill Gates's houses, although I missed that.  Lots of fun conversations (although punctuated by important issues -- how universities handle parental leave, how to handle cheating in classes, growth and hiring) and catching up on various goings-on.

Sunday, July 14, 2013

Sleeping in Seattle

Unless I'm asked not to, I'll try to live-blog a bit over the next few days from the Microsoft Faculty Summit.  It's my first time to this event.  I understand Bill Gates will give some sort of keynote tomorrow morning, and there are plenty of other interesting speakers, so I expect there will be stuff worth writing about.

I'll be going to present our work (really Justin Thaler's work, with others) on verification for cloud computing, on a panel covering recent verification work with Michael Walfish and Bryan Parno.  (Monday morning!  Please come by!)   I had suggested they'd be better off having Justin present rather than me, but something was muttered about a faculty summit being for "faculty", so I agreed to go.  I feel like Justin's work deserves the attention.  

I've managed to plan appropriately and have various theses and proposals to read and review on the long plane flight.  Indeed, I probably would have said no to these requests if I hadn't had a plane flight scheduled.  I don't write very well on flights, but reading is manageable (and even desirable, so I don't start mentally going "Are we there yet?" every few minutes, like my kids do verbally).  So I'll have something to do besides watching TV or a movie for six hours.  I do expect I'll be trying to say "no" to such requests more this year, so the flight gives me a chance to feel virtuous for hopefully at least a semester. 

If you're there, and you see me, please feel free to say hi.



Wednesday, July 03, 2013

Nice Work If You Can Get It.

Links regarding David Petraeus's offer from CUNY to teach a course for $200,000.  (Or, maybe, now $150,000.  Who knows.)

Gawker has a lot of info
A letter from a New York Assemblyman to CUNY
Money has some nice info
Even the Chronicle of Higher Education weighs in.

Here's an interesting question for discussion and debate.  CUNY apparently is arguing that this is OK because it won't be paid for by taxpayer money, but by gift money from a donor.  It seems like this approach -- getting gift money specifically for a celebrity lecturer -- has high risk for unseemly outcomes.  On the other hand, where exactly is the line? 

[A local example:  Harvard has a fairly new well-loved class on Science and Cooking, where celebrity chefs come in and give lectures, and I'm sure SEAS has gone out to raise money especially for innovative teaching such as this, if not for this class directly.  On the other hand, we don't pay the chefs 6-figure salaries.  In fact, I'm not even sure we pay them (or perhaps we pay a nominal honorarium) -- the chefs, from what I hear, are excited to come teach students.  So it's not the same sort of thing, but it perhaps gives ideas that the lines could get blurry here fairly quickly.]   

Saturday, June 29, 2013

And Thanks for All the Fish, Altavista Version

All sorts of news about the plug finally being pulled on Altavista, which I still have an attachment to, being myself partially a product of DEC.  Here's a nice eulogy.  There's a good basic history at Wikipedia's Altavista page. 

The book The Search: How Google and Its Rivals Rewrote the Rules of Business and Transformed Our Culture (mostly about Google, but covers other history as well) probably sums up Altavista's history as well as anything:
The mighty rise and fall with spectacular regularity in this business, and the pace of boom and bust only increased as the Internet took root in the mid-1990s.  Yet Altavista is remarkable for a number of reasons.  To borrow from the present, Altavista was the Google of its era.  In 1996, it was arguably the best and most-loved brand on the Web.  It presaged many of the current innovations and opportunities in search, from automatic language translation to audio and video search to clustering of results.  And as a business Altavista attempted -- and failed -- to go public three times in three short years under three different owners.  Possibly most instructive, Altavista was the product of a company that was an extraordinary success in its original business but ultimately failed because of hidebound management unwilling to drive by anything other than the rearview mirror. 

Friday, June 28, 2013

And Thanks For All the Fish

As my Area Administrator Tristen reminded me, "...today is officially your last day as my boss..."  Monday is July 1, which officially ends my term as Area Dean for Computer Science at Harvard.  The indefatigable David Parkes will be taking on the position.  (Thank you, David!  And my condolences!) 

While a ponderous exposition of all the wonderful things that have happened in Harvard CS is clearly called for, I'll try to keep it brief.  My main goal in taking the job was to turn our small group into a somewhat larger group, and I feel that has gone well.  We've hired five new excellent faculty over the last 3 years (Ryan Adams, Eddie Kohler, Jelani Nelson, Yaron Singer, and Stratos Idreos).  We've also done well in promotions, including multiple successful tenure cases, which was the other really important part of my job.  In other news, CS enrollments at Harvard are still booming, and while credit for that certainly belongs to others (a shout-out here to David Malan, who keeps bringing more and more students into our intro CS 50 course somehow), as Area Dean, I consider it my job to take credit for it.  (Similarly, while I'm at it, I'll take some credit for Les Valiant finally winning his long-deserved Turing award!)  Our faculty, who have always been friendly, cooperative, and able to work together well, continue to do so.  So I didn't break anything there (which is probably as good a summary as any of my past three years).  My job was really to be a buffer with other administration so the rest of the faculty could go about their business being as great as they are.  And, as I've said, then taking a share of the credit for their greatness afterwards.  

I've already thanked all the faculty for putting up with me the last few years.  But special thanks goes to my Administrator Tristen Dixey, who insists on calling me "boss" even though it's quite clearly more correct the other way around.  She makes CS at Harvard go.  And while all the faculty are always helpful, I very frequently leaned on the trio of Harry Lewis, Greg Morrisett, and Margo Seltzer for Area Dean advice, to make sure I didn't do anything too stupid.

Other thanks go to my graduate students -- both Zhenming Liu and Giorgos Zervas, who previously graduated, and Justin Thaler this year -- for keeping me involved in (their) research.  (And all my other collaborators as well, but my students especially.)  Sorry you had to put up with me administrating while you were busy doing the work to graduate.   

It's hard to believe it's been three years.  I imagine someday I may find myself taking on another administrative position.  But for now, it's a nice feeling just to be done with this one.  




Sunday, June 23, 2013

How Should We Choose Students?

Some of my previous posts have led me to think about the following -- something I'm hoping to write a longer piece about in the near future.

In the past few weeks, at Harvard (and elsewhere) there have been reports about the "decline of the humanities".  (Whether these reports have any significant basis in reality is not necessarily important to this post.)  But machine learning keeps getting better and better.  While we may never be able to predict the exact outcome for an individual student, statistically speaking, as the universities gather more data, they will get better at predicting, for example, what a student will major in.  Potentially, with the right sort of tracking, universities may be able to predict reasonably well what jobs students may go into -- heck, they may get a statistically meaningful prediction of their future net worth.*  In particular, if we wanted to choose students according to what they were going to major in, in order to keep the humanities supporters happy, we could;  while we can already kind of do that now (based on, for example, what students say they want to major in), we'll just keep getting better at it.

This will lead to all sorts of questions.  Or, perhaps better said, will make the questions that already to some extent exist more pronounced.  First, getting to the humanities concern, how should we choose our students?  Should we have quotas by future major?  Should we assign departments a permanent percentage (well, an "expected percentage") of the incoming class and accept students accordingly?  From some faculty members' and administrators' point of view, perhaps this makes sense;  we can guarantee a department size, and a suitable faculty/student ratio per department.  To me, it seems potentially disastrous, turning the university into a static entity, which perhaps would not in any sense limit any individual student in terms of what they want to study, but would create a less flexible global atmosphere.  Again, in some sense, this question exists today;  at least some people have responded to the "humanities crisis" by saying that how students are accepted should be changed (to give preference to humanities-interested students), but the question becomes an even more significant challenge once you assume you actually have very strong prediction methods that can allow you to select students in this way more accurately than has been the historical norm.   

Of course, going beyond the picayune issue of whether we should choose students according to what they might major in, there's the larger scale question of how we should choose students.  Indeed, this question lies at the heart of many an affirmative action lawsuit, with the "reverse affirmative action" side claiming that people of what I will call "white" descent are not admitted in favor of less qualified "non-white" students.  (The issue is obviously more complicated than this paragraph can do justice to;  for example, the issue of Asian American discrimination arises.)  In such discussions, one generally hears the term "merit" -- if only schools just took the top people according to merit and ignored race completely -- but what exactly is merit?  Legislators or judges seem to want some sort of formula (usually based on grades and/or test scores -- except that, by studying their own big data, some at Google claim that "G.P.A.'s are worthless" for their hiring).  Let's suppose our machine learning tools are good enough to estimate merit quite accurately if we define the merit objective function for them.**  How should we define it?  One particularly intriguing question: is the "merit" of the class simply the sum of merits of the collected individuals -- in which case we should ignore things like what major they want to choose -- or is the merit of the sum different from the sum of the merits?  I have some of my own not-completely-worked-out ideas, but again, this seems worth writing a longer essay about to work through the possibilities and implications.  

A further interesting question that arises is what sort of information can and should universities gather about applicants, in order to make these predictions.  College applications already ask for a lot -- grades, lists of activities, essays, letters of recommendation, test scores, sometimes interviews.  Suppose, though, that we could much more clearly predict your "merit" as a future student by parsing your Facebook account, or better yet, your e-mail from the last 3 years.  Should we be able to ask for that?  Perhaps we can guarantee that our algorithms will return a score only and your actual e-mail will not be examined at all by any human beings.  Or, by the time we get to the point where our machine learning algorithms are ready for that data, privacy won't matter to anyone anyway, especially if providing access to the data is needed to get into their choice of school. 

In some sense, none of these questions are inherently new.  But they appear to become different in kind once you think about the power machine learning will give to systems that make decisions about things like who goes to what university.  While the university setting is arguably small, the themes seem quite large, and perhaps the university is the place where some of the thinking behind the larger themes needs to be taking place.  And taking place now, before the technology is here and being used without a lot of thought into how it really should be used.

* Obviously, there are countless other potentially more significant uses of machine learning technology.  But I work at a university, so this is what has come to mind recently.   

** As far as I know, the merit function for Harvard is not "how much will you or your family donate to Harvard in the future".  But it could be.  Even if we avoid the potential self-interest of universities, to what extent is net worth a suitable metric of merit?  I was an undergraduate at Harvard and am now a professor there;  Bill Gates was an undergraduate (who notoriously dropped out) and donated a large amount of money for the building I now work in, and apparently has had a few other successes.  Extreme cases, to be sure, but how would the merit objective function judge these outcomes?