Continuing the construction of your own paper, write the related work section of your paper, and provide an initial list of references.
Keep in mind what has been said about related work and references, especially the “potato stamp” method!
Also note that these sections are likely to change as things move forward, but write them to the best of your ability at this time (short is best).
Please keep the number of references under 5 total. This section should not be longer than 66% of one page, 12 point font.
Submit via MySchool.
If you have any questions, please do not hesitate to contact me.
Web services are becoming an increasingly important part of the Internet service infrastructure. Their reliance on XML and HTTP has raised concerns about the overhead of message processing and transmission, especially in the mobile wireless community [KLT05].
Recent work on lazy parsing is a major step towards alleviating this problem. However, because XML documents lack internal navigation pointers, lazy parsers must still read the entire document in order to extract the overall document structure. Further, these parsers must load and parse the entire virtual document tree into memory during XML query processing. These overheads significantly degrade the performance of navigation.
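The overhead can be illustrated with a conventional DOM-style parser; a minimal Python sketch, using a made-up toy document:

```python
import xml.etree.ElementTree as ET

# A toy document; real web-service payloads are far larger.
doc = "<order><item sku='a1'/><item sku='b2'/></order>"

# Even to answer a single query ("first item's sku"), a conventional
# in-memory parser reads and materializes the whole tree first.
root = ET.fromstring(doc)           # full parse happens here
first_sku = root.find("item").get("sku")
print(first_sku)                    # -> a1
```

Without internal navigation pointers, even a lazy parser must scan the whole document once to recover this structure before any targeted access is possible.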
The problem described by Kangasharju et al. is not confined to the mobile domain; a general approach to alleviating it is therefore described by Farfán et al. We are
[FHR07] Fernando Farfán, Vagelis Hristidis, and Raju Rangaswami. Beyond lazy XML parsing, 2007.
[KLT05] Jaakko Kangasharju, Tancred Lindholm, and Sasu Tarkoma. Requirements and design for XML messaging in the mobile environment. In Anerousis, N., Kormentzas, G., eds.: Second International Workshop on Next Generation Networking Middleware, pages 29–36, 2005.
Beymer et al. (1997) and Coifman et al. (1998) introduced tracking of vehicle features, such as distinguishable lines or points on the vehicle, as a way to track individual vehicles. While this is a very robust way of tracking vehicles, it places several points on each vehicle, increasing the computational intensity of the task. While this approach can be used very effectively to track traffic, it is unable to determine whether a vehicle is applying its brakes without slowing down, which has been empirically shown to be a cause of tailbacks. The fact that this system can only determine deceleration or acceleration through a vehicle's progress across the field of view also contributes to the computational intensity. This work, while very useful, is therefore not a sufficient solution to the problem at hand.
In 2000, Kamijo et al. introduced a method for tracking vehicles at road intersections based on their shape (Kamijo et al., 2000). Because of the way the system recognises the shapes of vehicles, this method assumes the availability of a high vantage point for a near top-down view, and it therefore has limited applicability as well as range. The approach also requires high processing power because of the complexities of shape recognition. Thus, while this system is of considerable utility for intersection monitoring, its utility for highway monitoring is very limited.
In 2003, Veeraraghavan et al. introduced a method for tracking vehicles at intersections that also relied on shape recognition (Veeraraghavan et al., 2003). To produce reliable data, this system needs to run its camera data through several filters, introducing increased computational complexity with each pass. The fact that the system was also designed for intersection monitoring decreases its applicability in highway scenarios.
All of these systems have in common the need for illuminated roadways. This severely limits their utility in rural areas where roadways are not lit. It is especially problematic in northern territories, where for a substantial part of the year each day may have as little as one or two hours of daylight. Our system is unaffected by ambient light conditions because it tracks points of light emitted by the vehicles themselves and is therefore not reliant upon secondary light sources.
The concept of using raw I/O in databases is not new: it was first supported on various types of Unix systems, such as AIX, and was later introduced to Linux and even Windows.
Raw I/O versus cached I/O. Which is better for performance, raw I/O or cached I/O, especially for databases? Although raw I/O can reduce CPU usage dramatically, the context must be considered: it typically results in longer elapsed times, especially for small I/O requests, because the data is not cached in memory. In contrast, with cached I/O the small requests are served from the cache, thereby reducing the elapsed time. That is why one can argue about whether the overall performance of your database is really improved.
Raw I/O means that the process interacts with a physical device directly, without the kernel's brokerage, but it is very hard to manage.
Direct I/O is the first step in eliminating the overhead of copying data first into the file buffer cache and then into the application cache, that is, the Oracle cache layer. Direct I/O is very similar in concept to raw I/O: both bypass the operating system's caching layer, which reduces CPU overhead. But there are still features that impact performance.
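Requesting direct I/O from user space can be sketched as follows; this is a minimal illustration assuming a Linux system, with an illustrative file name and helper function, and it deliberately falls back to a cached write where `O_DIRECT` is unsupported (e.g. on tmpfs):

```python
import mmap
import os

BLOCK = 4096  # O_DIRECT requires buffer, offset and size aligned to the block size


def write_direct(path: str, data: bytes) -> int:
    """Write one block, asking the kernel to bypass the page cache.

    Falls back to an ordinary cached write on filesystems or
    platforms that reject O_DIRECT.
    """
    buf = mmap.mmap(-1, BLOCK)  # anonymous mmap => page-aligned memory
    buf[:] = data.ljust(BLOCK, b"\0")[:BLOCK]
    flags = os.O_WRONLY | os.O_CREAT | os.O_TRUNC
    try:
        fd = os.open(path, flags | os.O_DIRECT)   # bypass the page cache
    except (OSError, AttributeError):
        fd = os.open(path, flags)                 # cached fallback
    try:
        return os.write(fd, buf)
    finally:
        os.close(fd)
        buf.close()


if __name__ == "__main__":
    n = write_direct("direct_demo.dat", b"hello")
    print("wrote", n, "bytes")
    os.unlink("direct_demo.dat")
```

The alignment requirement is the price of bypassing the cache: the kernel transfers data straight between the user buffer and the device, so the buffer must satisfy the device's block-size constraints.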
IBM has introduced Concurrent I/O, which they say should be nearly as fast as raw I/O and a 200% improvement over Direct I/O:
"The throughput of the database application under Concurrent I/O was three times the throughput achieved with Direct I/O - a 200% improvement - and lagged the performance on raw logical volumes by only 8%."
Concurrent I/O has all the benefits of Direct I/O as for serialization of
Danny Kalev: Raw Disk I/O. ITWorld, October 12, 2001. http://www.itworld.com/nl/lnx tip/10122001 (accessed 20 September 2008).
Sujatha Kashyap, Bret Olszewski, Richard Hendrickson: Improving Database Performance With AIX Concurrent I/O. IBM Corporation.
In “Give Students a Clue” (Hansen, Bruce, & Harrison, 2007), Hansen et al. acknowledge the lack of a pedagogy for demonstrating core concepts to undergraduate students in AI. Their solution, however, is to create a specialized system, “Glomus”, based on the board game Clue. Where Glomus succeeds in using a portable language (Java), it fails at being introductory on all the core concepts of AI.
In “Introductory AI Educational Resources on the Web” (Amant & Young, 2001), St. Amant et al. give good pointers to AI teaching software for iAI, such as AILab. These resources, however, have become badly outdated: AILab was written in LISP in 1993–94 and last updated in 1995. LISP is considered quaint by C.S. students today and should not be the language of preference.
In “Teaching Introductory Artificial Intelligence Using a Simple Agent Framework” (Pantic et al., 2005), Pantic et al. write about the need for an objectivist vs. constructivist software system to teach iAI. Their aim is first-year undergraduates. Their assumption that there is a need for such a system is correct, but their conclusion to develop yet another specialist system is not. They created a system written in Java (for portability and ease of modification by students) but limited it to a web-search agent rather than devising a solution with a wider usage and tutoring base.
Amant, R. S., & Young, R. M. (2001). Introductory AI educational resources on the web. Intelligence, Winter, 15-17.
Hansen, D. M., Bruce, J., & Harrison, D. (2007). Give students a clue (No. 38).
Pantic, M., Zwitserloot, R., & Grootjans, R. J. (2005, August). Teaching introductory artificial intelligence using a simple agent framework. IEEE Transactions on Education, 48(3), 382-
Russell, S., & Norvig, P. (2003). Artificial intelligence: A modern approach (2nd ed.).