Assignment P5: Introduction
Example of an excellent Introduction:
Most user-facing software in the modern world requires constant maintenance. Software updates change programs in many ways, such as adding new features, fixing bugs, patching security holes, improving performance, and cleaning up internal code. In theory, the obvious purpose of updating a piece of software should be to improve it in some way. There will always be trade-offs, but in general one would hope that software would get \emph{better} over time, not \emph{worse}. Unfortunately, it is not clear whether this is actually happening. Our hypothesis is that in the real world, not only do most updates fail to make users happier, they actually \emph{reduce} happiness by frustrating users.

Software developers do not currently have a way to objectively measure whether the updates they create actually make their users happier. Social media sites like Twitter and Facebook can provide a deluge of user feedback, but without a method for analyzing the fire-hose of data it can be difficult to tell how users view changes. Without a concrete method for analyzing how their work affects their users, programmers cannot know whether their efforts are having the desired effect, and they cannot refine their development practices over time to improve their results.

In Section~\ref{sec:sampling} we describe a method for collecting and filtering social media data (from Twitter) to build a corpus of user feedback on a particular software product for a specific period of time. This builds on similar work by ABC (ABC 2011), with several improvements specific to software-related content. We also briefly describe the collection of update release dates for several software products (a straightforward task). We then show how to analyze this corpus for sentiment in Section~\ref{sec:analysis}. This is mostly straightforward NLP sentiment analysis, combining methods by ABC (ABC 2011) and XYZ (XYZ 2010).
In Section~\ref{sec:results} we provide plots of sentiment and release dates for a number of popular user-facing software programs. We correlate sentiment over time with product release dates using methods described by AAA (AAA 1990) to determine whether our hypothesis holds and release dates are statistically significant predictors of immediate negative changes in sentiment. We give a more thorough review of the related work we have built on in Section~\ref{sec:related-work}. Finally, we close with Section~\ref{sec:conclusion}, where we review the implications of our results and suggest further opportunities for study.
- Very clear motivation
- Very clear relationship to prior work (remains to be filled in, however - but the idea is perfectly exemplified) while giving only a limited number of references (three - a good number, and good places to give them), exactly the way this should be in the Intro.
- Length of Intro just right.
- No repeating verbatim text from the Abstract.
- Perhaps a bit too much to have three paragraphs relating to the structure of the paper, but very well done and forgivable.
- Simply works!
Another example of a great Introduction: