

<-BACK to REM4-18 MAIN


Approved Topics With Tentative Titles




Improving usability of medical imaging viewer for virtual reality

Tomáš Michalík

I have found an interesting paper, http://ieeexplore.ieee.org/abstract/document/7892382/, on which I would like to build the topic for this writing.

The paper describes a type of visualization for volumetric data (in this case MRI). It displays part of the data in a decomposed form, with individual voxels rendered with gaps in between. Because this rendering is computationally expensive, only a subset of the data can be displayed at a time.

A possible improvement is to use Leap Motion, a hand-tracking technology, instead of the HTC Vive 3D controllers. My hypothesis is that using the hand itself, without holding any controller, will enable the user to interact with the environment with higher precision.

Why this may have a significant impact: higher precision allows us to display the data at a smaller scale, since the user can still perform the same task (e.g. selecting particular voxels). The smaller the data on screen, the higher the achievable frame rate, because rendering the volumetric data, not the surrounding scene, is the most computationally expensive part. This saving could be spent on a higher frame rate or on displaying a larger subset of the data at the same frame rate.
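As a rough illustration of this argument (not part of the paper), here is a minimal Python sketch. The selection-error logs for the two input devices are hypothetical placeholders, and the cubic scaling of the voxel budget is a simplifying assumption.

  # Minimal sketch: quantifying the precision -> scale -> frame-rate argument.
  # selection_errors_* are hypothetical logs of 3D selection error (in mm)
  # collected from a voxel-picking task with each input device.
  import statistics

  selection_errors_vive = [4.1, 3.8, 5.0, 4.4, 3.9]   # placeholder values
  selection_errors_leap = [2.0, 2.3, 1.9, 2.2, 2.1]   # placeholder values

  mean_vive = statistics.mean(selection_errors_vive)
  mean_leap = statistics.mean(selection_errors_leap)

  # If hand tracking is k times more precise, the volume can be rendered
  # roughly k times smaller while the same voxels remain selectable.
  scale_factor = mean_vive / mean_leap

  # The rendered voxel count shrinks with the cube of the linear scale; this
  # is the budget that can be spent on frame rate or a larger data subset.
  voxel_budget_gain = scale_factor ** 3
  print(f"precision gain: {scale_factor:.2f}x, voxel budget gain: {voxel_budget_gain:.2f}x")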








Comparing the Burrows-Wheeler Aligner (BWA) and the Bowtie 2 software for short read alignment

Antton Lamarca

Both the 'Burrows-Wheeler Aligner (BWA)' and 'Bowtie 2' are used to map short DNA sequence reads against a known genome. However, the computational strategies used by each of them are different, and as a result their performance varies. For example, 'Bowtie 2' seems to be considerably faster.

There are other candidates as well, so the comparison could involve other programs. The good thing about these software packages is that, since they use essentially the same input data, comparing their output is quite intuitive.
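A minimal sketch of how the runtime part of the comparison could be driven, assuming both tools are installed, the reference has already been indexed for each of them, and a file reads.fq exists. The command lines follow the tools' common usage (bwa mem, bowtie2 -x/-U) but should be checked against the installed versions.

  # Sketch of a runtime comparison; both tools write SAM to stdout here.
  import subprocess, time

  commands = {
      "bwa":     ["bwa", "mem", "ref.fa", "reads.fq"],
      "bowtie2": ["bowtie2", "-x", "ref_index", "-U", "reads.fq"],
  }

  for name, cmd in commands.items():
      start = time.perf_counter()
      # Redirect the SAM output to a file so the alignments can be compared later.
      with open(f"{name}.sam", "w") as out:
          subprocess.run(cmd, stdout=out, check=True)
      print(f"{name}: {time.perf_counter() - start:.1f} s")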








Intrusions in the cloud - A comparison of three approaches

Guðný Lára Guðmundsdóttir

I have researched some more and found that several intrusion detection systems have been suggested and/or implemented in papers. Would it make sense to choose two of those, explain and compare them, so that the hypothesis could be that one does something better?

An intrusion is defined here as the process of entering a network without proper authentication.

I found a very recent paper from 2017 that classifies intrusion detection techniques into four categories:

  1. Misuse detection
  2. Anomaly-based detection
  3. Virtual machine introspection
  4. Hypervisor-based introspection

I thought I could either compare two to four of those categories or compare the techniques used within one of them.

This “doing better” could be tested experimentally by performing various intrusions and checking whether each system detected them, then simply comparing how many intrusions were detected per system.
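A minimal sketch of that measurement, with a hypothetical set of test intrusions and made-up detection logs per system. The real experiment would replace these with the output of the actual IDS implementations being compared.

  # Replay a fixed set of intrusions against each system and count detections.
  intrusions = ["port_scan", "brute_force_ssh", "vm_escape", "sql_injection"]

  # Hypothetical detection logs: which test intrusions each system flagged.
  detections = {
      "misuse_detection": {"port_scan", "brute_force_ssh", "sql_injection"},
      "anomaly_based":    {"port_scan", "vm_escape"},
      "vm_introspection": {"vm_escape", "brute_force_ssh"},
  }

  for system, detected in detections.items():
      rate = len(detected & set(intrusions)) / len(intrusions)
      print(f"{system}: detected {rate:.0%} of the test intrusions")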








Does design-driven development lead to more effective software?

Julia Elisabeth Haidn

The paper would try to answer this question and would also include a comparison of design-driven software development with established software development techniques, in order to evaluate the concept of “more effective”.

The word “effective” could refer to, e.g., customer acceptance, coverage of customer needs, generating revenue (i.e. making a product successful), reducing risk, increasing speed, and learning.








Sustainability of Diets With Low Environmental Negative Impact

Matteo Altobelli

Recent studies show that the impact of different diets on the environment is not exactly what we imagine it to be. A commonplace assumption is that the vegan diet (based on the exclusive consumption of products of plant origin) is absolutely the best choice. However, recent studies involving large national and international producer countries (such as the United States, Australia, Spain, and Italy) have shown that the vegan diet is less sustainable than many variants of the vegetarian diet, and even than some omnivorous ones: the intensive use of fields for cultivation, combined with the lack of exploitation of grazing land (which is often not suitable for cultivation), means that fewer people can be fed, causing, in the long run, greater exploitation of the environment.

Regarding the health aspect, the research proposes a study on samples that vary in terms of age and habits, in order to analyze the effects of the diets under different conditions. For the environmental aspect, land from different environments will be used. These soils will be used for a fairly long period of time, and in order to meet the food demand of a number of people appropriate for the area used. In the long term, it will therefore be possible to collect the results of the research.

The experiment at the environmental level will proceed as follows:

  - The land will be cultivated following the canons of organic farming (very similar to the “classical rotation”) rather than those of intensive agriculture. The reasons are simple. Although organic farming can yield up to 20% less than conventional agriculture, the environmental benefits of the former make up for the significantly lower yields. Even though conventional agriculture produces more food, it does so at the expense of the environment: loss of biodiversity, environmental degradation and serious impacts on ecosystem services. Organic farming, on the other hand, tends to store more carbon in the soil, improving its quality and reducing erosion. This type of agriculture reduces soil and water pollution as well as greenhouse gas emissions, and it is more energy efficient because it does not rely on synthetic fertilizers and pesticides.
  - At each harvest the quantity and quality of the food produced will be measured; different parameters for evaluating land exploitation (pH, salinity, acidity, alkalinity, mineral fraction, total and active limestone, organic matter, nitrogen, phosphorus, potassium, magnesium, calcium, heavy metals, biological activity) will be analyzed to study the state of the environment.
  - In soils where there would be livestock (in the case of the omnivorous diet), a portion of land dedicated exclusively to rest, and therefore to grazing livestock, will be added to the classical rotation.
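As a rough illustration, the per-harvest measurements could be aggregated into a simple “people fed per hectare” figure per diet scenario. The sketch below uses placeholder scenario names, yields and land areas, not real data.

  # Sketch of aggregating per-harvest records into a per-scenario comparison.
  # All figures are placeholders; the real study would substitute measured
  # yields (kcal produced) and the land area used by each diet scenario.
  KCAL_PER_PERSON_PER_YEAR = 2500 * 365

  harvests = [
      {"scenario": "vegan",      "kcal": 9.0e9, "hectares": 1000},
      {"scenario": "vegetarian", "kcal": 8.5e9, "hectares": 900},
      {"scenario": "omnivorous", "kcal": 7.0e9, "hectares": 1100},
  ]

  for h in harvests:
      people_fed_per_ha = h["kcal"] / KCAL_PER_PERSON_PER_YEAR / h["hectares"]
      print(f"{h['scenario']}: {people_fed_per_ha:.2f} people fed per hectare per year")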

After a sufficiently long period (3 to 5 years), the results will be compared and published.








Deep Reinforcement Learning for Multiplayer Games

Guðmundur Páll Kjartansson

Reinforcement Learning has been very successful in creating AI that can learn to play games. It is a technique based on behavioural psychology, where actions that lead to good results are rewarded and those that lead to bad results are punished. One notable work on RL is the 2013 paper by DeepMind, which demonstrated a learning technique that could teach itself 7 different Atari games [1]. It combined a convolutional neural network for analysing video input with a Markov Decision Process based (stochastic) learning model. A remaining question is how well suited their approach is to learning multiplayer games, where you have to predict the behaviour of other agents. If it is not well suited, can we modify their technique based on some previous work [2][3] on multiplayer games?

For this paper, I will choose Scorched Earth and Worms as the multiplayer games. These are both turn-based and seem very fitting for reinforcement learning. We will assume 3 players that are all competing against each other.

Experiments:

1. Constant vs time-dependent learning rate:

Compare the win/loss ratio of constant and time-dependent learning rates with the Q-learning algorithm. Here, time-dependent means that the learning rate of the algorithm will decrease in each iteration as a function of the number of iterations. The assumption is that after some fixed number of iterations, time-dependent learning will outperform any tested constant learning rate.
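A minimal sketch of the tabular Q-learning update with the two learning-rate schedules this experiment compares. The state/action encoding for the actual games and the 1/(1+t) decay schedule are illustrative assumptions.

  # Tabular Q-learning update with either a constant or a time-dependent
  # learning rate; state/action encoding for the games is left out.
  from collections import defaultdict

  GAMMA = 0.95
  Q = defaultdict(float)          # Q[(state, action)] -> estimated value

  def learning_rate(t, constant=0.1, time_dependent=False):
      # Time-dependent schedule: decays with the iteration count t.
      return 1.0 / (1.0 + t) if time_dependent else constant

  def q_update(state, action, reward, next_state, actions, t, time_dependent):
      alpha = learning_rate(t, time_dependent=time_dependent)
      best_next = max(Q[(next_state, a)] for a in actions)
      Q[(state, action)] += alpha * (reward + GAMMA * best_next - Q[(state, action)])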

2. Loss-to-win ratio decreases exponentially with more training time:

A simple experiment. Here we assume a certain (exponential) relationship between the training time of the Q-learning algorithm and the loss-to-win ratio … keeping all other parameters fixed.
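One way the hypothesized exponential decay could be checked is by fitting a line to the logarithm of the loss-to-win ratio; the data points in this sketch are placeholders for logged measurements.

  # If loss_to_win ~ a * exp(-b * t), then log(loss_to_win) is linear in t.
  import numpy as np

  training_hours = np.array([1, 2, 4, 8, 16])           # placeholder values
  loss_to_win    = np.array([3.0, 2.2, 1.3, 0.8, 0.4])  # placeholder values

  slope, intercept = np.polyfit(training_hours, np.log(loss_to_win), 1)
  print(f"fitted decay rate b = {-slope:.3f}, a = {np.exp(intercept):.2f}")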

3. Competing agents vs paranoid:

This is a more game-theoretic idea: compare the performance of a Q-learning model that assumes all opponents are playing to win with another model that assumes all opponents are teamed up against the learning agent.
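A minimal sketch of the two opponent assumptions at the value-backup level, using made-up joint-action values; for simplicity the two opponents' preferences are merged into a single value table. The paranoid model takes the worst case over the opponents' joint actions, while the competing model predicts the joint action that is best for the opponents themselves.

  # Joint actions are (opponent1_move, opponent2_move) tuples; values are placeholders.
  def paranoid_backup(my_value):
      # Paranoid: the opponents jointly pick whatever is worst for me.
      return min(my_value.values())

  def competing_backup(my_value, opponents_value):
      # Competing: predict the joint action the opponents prefer for
      # themselves, then evaluate my value under that prediction.
      predicted = max(opponents_value, key=opponents_value.get)
      return my_value[predicted]

  my_value        = {("attack", "hide"): 0.2, ("hide", "attack"): 0.6}
  opponents_value = {("attack", "hide"): 0.9, ("hide", "attack"): 0.3}
  print(paranoid_backup(my_value), competing_backup(my_value, opponents_value))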








Comparison of Increased Block Sizes and the Lightning Network - Improving Transaction Processing of Cryptocurrencies

Elías Ingi Elíasson

One of the bigger problems in the cryptocurrency world is processing transactions. In order for cryptocurrency transactions to go through and be approved, they have to pass through a rather slow process. The transactions are assigned to a so-called block, which can only store a limited number of transactions. In order for the transactions to be resolved, they are checked for validity and compared with previous transactions and the user's account balance. Additionally, for each block a specific hash has to be found in order to create the next block that will store the next batch of transactions, creating a chain of blocks: the blockchain. The process of finding a hash usually takes about 10 minutes. Those working on resolving the blocks are called miners, and they are heavily rewarded for finding the correct hash. This delay, as well as the limited transaction storage space in each block, has resulted in people adding transaction/transfer fees to their transactions, which are paid to the miners, so that their transactions will be selected for processing sooner than others.
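A back-of-the-envelope sketch of why the block size caps transaction throughput, using rough, Bitcoin-like assumptions (a 10-minute block interval and an average transaction size of about 250 bytes); the figures are approximations, not measured values.

  # Rough throughput estimate as a function of block size.
  BLOCK_INTERVAL_S = 10 * 60   # ~10 minutes per block
  AVG_TX_BYTES = 250           # rough assumption for an average transaction

  def tx_per_second(block_size_bytes):
      return (block_size_bytes / AVG_TX_BYTES) / BLOCK_INTERVAL_S

  for mb in (1, 2):
      print(f"{mb} MB blocks: ~{tx_per_second(mb * 1_000_000):.1f} tx/s")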

Numerous ideas have come up on how this delay can be shortened, both for the purpose of faster transactions and so that people aren't “forced” to add a fee to their transaction in order for it to be processed. Among those ideas is doubling the size of the blocks, so that they can store more transactions, which would result in more transactions being processed for each hash. Another idea is using a bidirectional payment channel called the Lightning Network. The Lightning Network allows two individuals that frequently engage in cryptocurrency transactions to create a “transfer channel” between each other, separated from the blockchain. An example of this is a person who purchases a cup of coffee for 1 coin every day at the same coffeehouse. He makes an initial deposit of 100 coins into the channel, and after he has purchased 100 cups of coffee the channel balance is depleted; a single transfer transaction is then assigned to a block instead of 100 smaller ones, resulting in fewer transactions per block.
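A tiny model of the coffee example, where many off-chain channel payments settle as a single on-chain transaction when the channel is closed; the class and figures are illustrative only, not the Lightning Network protocol itself.

  # Toy payment channel: off-chain payments, one on-chain settlement at close.
  class PaymentChannel:
      def __init__(self, deposit):
          self.customer_balance = deposit
          self.shop_balance = 0
          self.off_chain_payments = 0

      def pay(self, amount):
          assert amount <= self.customer_balance, "channel balance depleted"
          self.customer_balance -= amount
          self.shop_balance += amount
          self.off_chain_payments += 1

      def close(self):
          # Only the final balances hit the blockchain: one transaction
          # instead of one per coffee.
          return {"on_chain_transactions": 1, "settled_to_shop": self.shop_balance}

  channel = PaymentChannel(deposit=100)
  for _ in range(100):
      channel.pay(1)
  print(channel.close(), "replacing", channel.off_chain_payments, "on-chain payments")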

Looking into how these ideas differ and what the benefits and drawbacks of each are (and maybe adding other ideas that I haven't looked into yet) would, I think, make for a good comparative experiment for finding an ideal method of processing cryptocurrency transactions with respect to processing speed, number of transactions, and transfer fees.







