P2a. Examples of Good and Bad Abstracts
- Copy the TITLES and full abstracts into a txt file
- Below each abstract, say why it's good or bad, and JUSTIFY your conclusion.
- PLEASE SUBMIT ASSIGNMENT IN MYSCHOOL.
- If you have any questions, please interact with the instructor before the deadline. FOR ALL COMMUNICATION WITH THE INSTRUCTOR [ thorisson - at - ru dot is ], PLEASE PUT “REM4” SOMEWHERE (anywhere) IN THE TITLE OF YOUR EMAIL, SO IT GETS CLASSIFIED CORRECTLY IN THE TEACHER'S INBOX. Thanks.
- This assignment is pass/fail - the instructor will judge when you have passed (until then you may be asked to find better examples).
Part A.
Find one example of what you consider to be a good scientific abstract and one that is bad. Any topic will do. You must provide good arguments (in a few sentences) for why you think each one is good/bad.
Use e.g. http://citeseer.ist.psu.edu/
Select two papers from computer science.
Part B.
Review the abstracts below.
The Abstracts to be Reviewed
Why are these good/bad?
A face recognition system must recognize a face from a novel image despite the variations between images of the same face. A common approach to overcoming image variations because of changes in the illumination conditions is to use image representations that are relatively insensitive to these variations. Examples of such representations are edge maps, image intensity derivatives, and images convolved with 2D Gabor-like filters. Here we present an empirical study that evaluates the sensitivity of these representations to changes in illumination, as well as viewpoint and facial expression. Our findings indicated that none of the representations considered is sufficient by itself to overcome image variations because of a change in the direction of illumination. Similar results were obtained for changes due to viewpoint and expression. Image representations that emphasized the horizontal features were found to be less sensitive to changes in the direction of illumination. However, systems based only on such representations failed to recognize up to 20% of the faces in our database. Humans performed considerably better under the same conditions. We discuss possible reasons for this superiority and alternative methods for overcoming illumination effects in recognition.
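For readers unfamiliar with the representations named above, here is a minimal sketch (not taken from the paper; the function names and parameter values are illustrative assumptions) of an intensity-derivative edge map and a 2-D Gabor-like filter applied to a grayscale image:

    # Minimal sketch, not from the paper: two of the representations the
    # abstract mentions, for a 2-D grayscale NumPy array.
    import numpy as np
    from scipy.signal import convolve2d

    def gradient_magnitude(image):
        # Edge-map-style representation built from image intensity derivatives.
        gy, gx = np.gradient(image.astype(float))
        return np.hypot(gx, gy)

    def gabor_kernel(size=15, wavelength=4.0, theta=0.0, sigma=3.0):
        # 2-D Gabor-like filter: a cosine carrier under a Gaussian envelope.
        # theta sets the filter orientation (radians); values here are assumed.
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
        return envelope * np.cos(2 * np.pi * xr / wavelength)

    image = np.random.rand(64, 64)                  # stand-in for a face image
    edges = gradient_magnitude(image)
    response = convolve2d(image, gabor_kernel(theta=np.pi / 2), mode="same")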
Affinely-Adjustable Robust Counterparts provide tractable alternatives to (two-stage) robust programs with arbitrary recourse. We apply them to robust network design with polyhedral demand uncertainty, introducing the affine routing principle. We compare the affine routing to the well-studied static and dynamic routing schemes for robust network design. All three schemes are embedded into the general framework of two-stage network design with recourse. It is shown that affine routing can be seen as a generalization of the widely used static routing, while still being tractable and providing cheaper solutions. We investigate properties of the demand polytope under which affine routings reduce to static routings and also develop conditions on the uncertainty set leading to dynamic routings being affine. We show, however, that affine routings suffer from the drawback that (even totally) dominated demand vectors are not necessarily supported by affine solutions. Uncertainty sets have to be designed accordingly. Finally, we present computational results on networks from SNDlib. We conclude that for these instances the optimal solutions based on affine routings tend to be as cheap as optimal network designs for dynamic routings. In this respect the affine routing principle can be used to approximate the cost for two-stage solutions with free recourse, which are hard to compute.
http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.128.703&rank=3
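As an aside for readers new to the terminology: the three routing schemes differ in how the second-stage flow may depend on the realized demand. A sketch in generic notation (the paper's own symbols may differ) is:

    % Generic notation, not the paper's: f_a^k(d) is the flow of commodity k
    % on arc a once the demand vector d from the uncertainty set is revealed.
    \[
      \text{static: } f_a^k(d) = y_a^k\, d_k, \qquad
      \text{affine: } f_a^k(d) = f_a^{k,0} + \sum_{h} f_a^{k,h}\, d_h, \qquad
      \text{dynamic: } f_a^k(d) \text{ arbitrary in } d.
    \]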
We show that learning and optimization methods can be used to solve a sorting problem dynamically, with better results than traditional selection of a single algorithm. By applying the Markov Decision Process and an experimentally devised cost function with Dynamic Programming to inputs of size 100, a hybrid algorithm (Insertionsort, Mergesort and Quicksort) performs better than Quicksort alone. The ideas presented here are possibly applicable to other domains; see (Lagoudakis & Littman 2000) and (Lagoudakis & Littman 2001).
http://web.mit.edu/~pucci/www/daniela_thesis.pdf
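To make the idea of a hybrid sort concrete, here is a minimal sketch of size-based dispatch between insertion sort and a quicksort-style partition (purely illustrative: the cutoff of 16 is an assumption, the mergesort branch is omitted, and the paper's learned MDP policy is not reproduced):

    # Minimal sketch, not from the thesis: dispatch between two sorting
    # algorithms based on input size.
    def insertion_sort(a):
        for i in range(1, len(a)):
            key, j = a[i], i - 1
            while j >= 0 and a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key
        return a

    def hybrid_sort(a):
        if len(a) <= 16:                   # assumed cutoff, not from the paper
            return insertion_sort(a)
        pivot = a[len(a) // 2]             # quicksort-style split for larger inputs
        left  = [x for x in a if x < pivot]
        mid   = [x for x in a if x == pivot]
        right = [x for x in a if x > pivot]
        return hybrid_sort(left) + mid + hybrid_sort(right)

    print(hybrid_sort([5, 2, 9, 1, 7, 3]))  # [1, 2, 3, 5, 7, 9]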
Tutoring systems for teaching introductory artificial intelligence are few and complex. This study argues for the development of new and better tutoring systems within the field. My evaluation is based on examining the available solutions and weighing the complexity and quality of the individual tutoring software. The quality index shows that none of the available solutions is sufficient for introductory courses and a new solution must be written to fill the gap.
We give an overview of the LEDA platform for combinatorial and geometric computing and an account of its development. We discuss our motivation for building LEDA and to what extent we have reached our goals. We also discuss some recent theoretical developments. This paper contains no new technical material. It is intended as a guide to existing publications about the system. We refer the reader also to our web-pages for more information.
http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.127.6527
The PVM system is a programming environment for the development and execution of large concurrent or parallel applications that consist of many interacting, but relatively independent, components. It is intended to operate on a collection of heterogeneous computing elements interconnected by one or more networks. The participating processors may be scalar machines, multiprocessors, or special-purpose computers, enabling application components to execute on the architecture most appropriate to the algorithm. PVM provides a straightforward and general interface that permits the description of various types of algorithms (and their interactions), while the underlying infrastructure permits the execution of applications on a virtual computing environment that supports multiple parallel computation models. PVM contains facilities for concurrent, sequential, or conditional execution of application components, is portable to a variety of architectures, and supports certain forms of error detection and recovery.
http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.47.2880
I actually own a copy of Harold Jeffreys’s Theory of Probability but have only read small bits of it, most recently over a decade ago to confirm that, indeed, Jeffreys was not too proud to use a classical chi-squared p-value when he wanted to check the misfit of a model to data (Gelman, Meng and Stern, 2006). I do, however, feel that it is important to understand where our probability models come from, and I welcome the opportunity to use the present article by Robert, Chopin and Rousseau as a platform for further discussion of foundational issues. In this brief discussion I will argue the following: (1) in thinking about prior distributions, we should go beyond Jeffreys’s principles and move toward weakly informative priors; (2) it is natural for those of us who work in social and computational sciences to favor complex models, contra Jeffreys’s preference for simplicity; and (3) a key generalization of Jeffreys’s ideas is to explicitly include model checking in the process of data analysis.
http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.217.2021&rank=28
In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimization. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.100.6361&rank=30
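For readers unfamiliar with dominance frontiers, here is a minimal sketch that computes them from immediate dominators using the well-known "runner" simplification (not necessarily the algorithm given in the paper; the parameter names and the tiny example CFG are assumptions):

    # Minimal sketch, not the paper's code. `idom` maps each node to its
    # immediate dominator; `preds` maps each node to its predecessors.
    def dominance_frontiers(preds, idom):
        df = {n: set() for n in idom}
        for n, ps in preds.items():
            if len(ps) >= 2:                  # frontiers arise at join points
                for p in ps:
                    runner = p
                    while runner != idom[n]:  # walk up the dominator tree
                        df[runner].add(n)
                        runner = idom[runner]
        return df

    # Diamond-shaped CFG: entry -> a, entry -> b, a -> merge, b -> merge
    preds = {"entry": [], "a": ["entry"], "b": ["entry"], "merge": ["a", "b"]}
    idom  = {"entry": "entry", "a": "entry", "b": "entry", "merge": "entry"}
    print(dominance_frontiers(preds, idom))
    # {'entry': set(), 'a': {'merge'}, 'b': {'merge'}, 'merge': set()}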
In today's world, security is required to transmit confidential information over the network. Security is also demanded in a wide range of applications. Cryptographic algorithms play a vital role in providing data security against malicious attacks. But on the other hand, they consume a significant amount of computing resources like CPU time, memory, encryption time, etc. There are various types of encryption algorithms, each with its own advantages and disadvantages. Depending on the application, we have to choose a particular algorithm or set of algorithms. In this paper we review some of the national and international research papers of the last two decades.
http://innovativejournal.in/index.php/ajcsit/article/view/799
ABSTRACT: This paper describes LLVM (Low Level Virtual Machine), a compiler framework designed to support transparent, life-long program analysis and transformation for arbitrary programs, by providing high-level information to compiler transformations at compile-time, link-time, run-time, and in idle time between runs. LLVM defines a common, low-level code representation in Static Single Assignment (SSA) form, with several novel features: a simple, language-independent type-system that exposes the primitives commonly used to implement high-level language features; an instruction for typed address arithmetic; and a simple mechanism that can be used to implement the exception handling features of high-level languages (and setjmp/longjmp in C) uniformly and efficiently. The LLVM compiler framework and code representation together provide a combination of key capabilities that are important for practical, lifelong analysis and transformation of programs. To our knowledge, no existing compilation approach provides all these capabilities. We describe the design of the LLVM representation and compiler framework, and evaluate the design in three ways: (a) the size and effectiveness of the representation, including the type information it provides; (b) compiler performance for several inter-procedural problems; and (c) illustrative examples of the benefits LLVM provides for several challenging compiler problems.
REASON: The above abstract is an example of a good abstract since it gives someone who is a computer scientist a very good idea of what LLVM is and what it does. It starts off by describing what LLVM is made for, then gives a brief description of the LLVM features. It provides just enough detail to give computer scientists an idea of what LLVM is and its capabilities; this amount of detail should allow the reader to decide whether to read the paper further or not.
http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.131.1243
Future teleconferencing may enhance communication between remote people by supporting non-verbal communication within an unconstrained space where people can move around and share the manipulation of artefacts. By linking walk-in displays with a Collaborative Virtual Environment (CVE) platform we are able to physically situate a distributed team in a spatially organised social and information context. We have found this to demonstrate unprecedented naturalness in the use of space and body during non-verbal communication and interaction with objects. However, relatively little is known about how people interact through this technology, especially while sharing the manipulation of objects. We observed people engaged in such a task while geographically separated across national boundaries. Our analysis is organised into collaborative scenarios, each of which requires a distinct balance of social human communication with consistent shared manipulation of objects. Observational results suggest that walk-in displays do not suffer from some of the important drawbacks of other displays. Previous trials have shown that supporting natural non-verbal communication, along with responsive and consistent shared object manipulation, is hard to achieve. To better understand this problem, we take a close look at how the scenario impacts on the characteristics of event traffic. We conclude by suggesting how various strategies might reduce the consistency problem for particular scenarios.
http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.109.4115&rank=2