Assignment P5: Introduction

Example of a good Introduction:


Analyzing Trends in User Attitude Towards Real-World Software: Do Users Hate Change?


Today's software rarely stays the same for very long. Programs are constantly updated to fix bugs, add features, and clean up code. One of the main goals of user-facing software should be to make its users happy, so it seems obvious that each update of a program should improve the software and make its users happier. However, in the real world, software updates often seem to be accompanied by user complaints of needless change and new bugs. For developers it can be difficult to tell whether these complaints come from a vocal minority or if they reflect the general feelings of users. Much previous work has been done on mining social media to measure sentiment on specific topics (like tobacco products) and to track events over time (such as influenza cases). However, to our knowledge no one has analyzed a large amount of social media data to determine if ongoing development of popular software products is actually making users happier. In this study we sampled a large amount of historical Twitter data about user-facing software such as Firefox, Tweetbot, and KDE. We used sentiment analysis to gauge the happiness of users over time, and compared this with dates of product updates to test whether software updates affect the feelings of users. The results are mostly disheartening: we found statistically significant negative spikes in sentiment shortly after releases for the majority of projects. The projects whose updates did not cause negative sentiment spikes may warrant further investigation to uncover lessons about how to craft software that avoids angering its users.


Most user-facing software in the modern world requires constant maintenance. Software updates change programs in many ways, such as adding new features, fixing bugs, patching security holes, improving performance, and cleaning up internal code. In theory, the obvious purpose of updating a piece of software should be to improve it in some way. There will always be trade-offs, but in general one would hope that software would get \emph{better} over time, not \emph{worse}. Unfortunately, it is not clear whether this is actually happening. Our hypothesis is that in the real world, not only do most updates fail to make users happier, they actually \emph{reduce} happiness by frustrating users. Software developers do not currently have a way to objectively measure whether the updates they create actually make their users happier. Social media sites like Twitter and Facebook can provide a deluge of user feedback, but without a method for analyzing the fire-hose of data it can be difficult to tell how users view changes. Without a concrete method for analyzing how their work affects their users, programmers can't know whether their efforts are having the desired effect, and they can't refine their development practices over time to improve their results. In Section~\ref{sec:sampling} we describe a method for collecting and filtering social media data (from Twitter) to build a corpus of user feedback on a particular software product for a specific period of time. This builds on similar work by ABC (ABC 2011), with several improvements specific to software-related content. We also briefly describe the collection of update release dates for several software products (a straightforward task). We then show how to analyze this corpus for sentiment in Section~\ref{sec:analysis}. This is mostly straightforward NLP sentiment analysis, combining methods by ABC (ABC 2011) and XYZ (XYZ 2010).
In Section~\ref{sec:results} we provide plots of sentiment and release dates for a number of popular user-facing software programs. We correlate sentiment over time with release dates of products using methods described by AAA (AAA 1990) to determine whether our hypothesis holds and release dates are statistically significant predictors of immediate negative changes in sentiment. We give a more thorough review of the related work we have built on in Section~\ref{sec:related-work}. Finally we close with Section~\ref{sec:conclusion} where we review the implications of our results and suggest further opportunities for study.

  • Very clear motivation
  • Very clear relationship to prior work (remains to be filled in, however - but idea is perfectly exemplified) while giving only a …
  • … limited number of references (three - good number, and good places to give them), exactly the way this should be in the Intro.
  • Length of Intro just right.
  • A bit too similar to the Abstract.
  • Perhaps a bit too much to have three paragraphs relating to the structure of the paper, but very well done and forgivable.
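The core comparison this example intro describes (is sentiment in a window after a release lower than the baseline?) can be sketched in a few lines. Everything here is illustrative: the daily scores, the release days, and the window size are invented for the sketch, not data from any study.

```python
from statistics import mean

# Hypothetical daily sentiment scores (e.g., mean tweet polarity per day).
sentiment = [0.30, 0.28, 0.31, 0.05, 0.02, 0.10, 0.29, 0.27, 0.06, 0.04]
releases = [3, 8]   # day indices on which an update shipped (illustrative)
window = 2          # number of days after each release to inspect

# Scores in the post-release windows vs. all remaining (baseline) days.
post = [sentiment[d]
        for r in releases
        for d in range(r, min(r + window, len(sentiment)))]
base = [s for i, s in enumerate(sentiment)
        if not any(r <= i < r + window for r in releases)]

drop = mean(base) - mean(post)
print(f"baseline {mean(base):.2f}, post-release {mean(post):.2f}, drop {drop:.2f}")
```

A real analysis would of course replace the toy window comparison with a proper significance test, as the example intro suggests, but the shape of the computation is the same.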

Another example of a great Introduction:

One of the major questions in the study of combinatorics of permutations is the enumeration problem: Given a set of permutations characterized by some interesting property, can we count how many permutations of length $n$ are in the set? A concrete example of this concerns sorting with a stack: How many permutations of length $n$ can be sorted by one pass through a stack? Knuth \cite{knuth1968art} showed that these are precisely the permutations that avoid the permutation pattern $231$, and this class of permutations is enumerated by the Catalan numbers. The ability to derive enumerations for arbitrary permutation sets would be immensely useful as a tool in combinatorics. Unfortunately, answering the enumeration problem is hard in general. A few algorithms have been developed to answer this question for a given set of permutations, but usually they need some structural information about the permutation set. These algorithms then output either some type of expression for the enumeration, often a generating function, or a different representation of the permutation set from which it could be easier to derive the enumeration, perhaps using existing tools and techniques. One example that influenced our contribution is the \BiSC{} algorithm by Magnusson and Ulfarsson \cite{magnusson2012algorithms}. It takes as input a finite sample of a permutation set, and outputs conjectures about the structure of the permutation set in terms of avoidance of mesh patterns. The \BiSC{} algorithm has been used to automatically conjecture statements of known theorems such as the descriptions of stack-sortable and West-2-stack-sortable permutations, smooth and forest-like permutations, and simsun permutations. In this paper we present a new kind of description for permutation sets called \textit{generating rules}. 
They have the advantage that, once a description of a permutation set in terms of generating rules has been established, deriving a generating function for the enumeration is often trivial. We then present the \PermStruct{} algorithm. As input it takes a finite sample of a permutation set, and, if successful, it outputs a description of the permutation set in terms of generating rules. The structure of this paper is as follows: In Section~\ref{sec:relwork} we present related work. In Section~\ref{sec:genrules} we present the generating rules used by \PermStruct{}. In Section~\ref{sec:permstruct} we describe the \PermStruct{} algorithm, and we evaluate its effectiveness in Section~\ref{sec:evaluation}. Finally, we give concluding remarks in Section~\ref{sec:conclusion}.

  • Clear motivation
  • Clear and highlighted relationship to prior work (remains to be filled in, however - but idea is perfectly exemplified) while giving only a …
  • … limited number of references, exactly as it should be in the Intro.
  • Length of Intro just right.
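The Knuth result cited in this example (one pass through a stack sorts exactly the 231-avoiding permutations, counted by the Catalan numbers) can be checked by brute force for small $n$. This sketch is purely illustrative:

```python
from itertools import permutations
from math import comb

def stack_sortable(perm):
    """Run one pass through a stack: pop while the top is smaller than
    the next input element. The permutation is stack-sortable iff the
    output comes out sorted (equivalently, it avoids the pattern 231)."""
    stack, out = [], []
    for x in perm:
        while stack and stack[-1] < x:
            out.append(stack.pop())
        stack.append(x)
    out.extend(reversed(stack))  # flush the remaining stack contents
    return out == sorted(perm)

def catalan(n):
    # Closed form: C_n = (2n choose n) / (n + 1)
    return comb(2 * n, n) // (n + 1)

counts = [sum(stack_sortable(p) for p in permutations(range(1, n + 1)))
          for n in range(1, 6)]
print(counts)  # [1, 2, 5, 14, 42]
assert counts == [catalan(n) for n in range(1, 6)]
```

The exhaustive counts match the Catalan numbers for $n \le 5$, which is exactly the enumeration claim the intro attributes to Knuth.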
/var/www/ailab/WWW/wiki/data/pages/public/rem4/rem4-15/review_of_p5.txt · Last modified: 2015/09/28 11:23 by thorisson2