The Hottest Seminar Topics for 2017 in Computer Science
Computer Science Seminar
The evolution of mankind began thousands of years ago, and our intelligence today is the result of that long developmental process. Technology, too, has been developing ever since man appeared; it is man who gave technology its present form. Today, however, technology is entering a phase in which it may outstrip man in both intelligence and efficiency. Man must now find a way to keep pace with technology, and one recent development in this regard is the brain chip implant. Brain chips are designed to enhance human memory, to help paralyzed patients, and to serve military purposes. It is likely that implantable computer chips acting as sensors or actuators may soon not only assist failing memory, but even bestow fluency in a new language or enable "recognition" of previously unmet individuals. The progress already made in therapeutic devices, prosthetics, and computer science indicates that it may well be feasible to develop direct interfaces between the brain and computers. This technology is still in its developmental phase, although many implants have already been placed in the human brain for experimental purposes.
Cybercrime involves the use of computers and the Internet by individuals to commit crime. Cyber terrorism, identity theft, and spam are identified as types of cybercrime. The study identified some of the causes of cybercrime to include urbanization, unemployment, and weak enforcement of cybercrime laws. The effects of cybercrime on organizations, society, and the country in general include reducing the competitive edge of organizations, wasting production time, and damaging the country's image. With Nigeria moving toward a cashless society, the cybercrime menace needs to be minimized, if not completely eradicated. Ways of combating such crimes include taking reasonable steps to protect one's property by ensuring that firms secure their IT infrastructure, such as networks and computer systems; governments formulating cybercrime laws and ensuring they are strictly enforced; and individuals observing simple rules such as keeping antivirus protection on their computer systems.
ABSTRACT Data mining, the extraction of hidden predictive information from large databases, is a powerful new technology with great potential to help companies focus on the most important information in their data warehouses. Data mining tools predict future trends and behaviors, allowing businesses to make proactive, knowledge-driven decisions. The automated, prospective analyses offered by data mining move beyond the analyses of past events provided by the retrospective tools typical of decision support systems. Data mining tools can answer business questions that traditionally were too time-consuming to resolve. Generally, data mining comprises several algorithms and techniques for extracting interesting patterns from large data sets. Data mining techniques are classified into two categories: supervised learning and unsupervised learning. In supervised learning, a model is built prior to the analysis; we then apply the algorithm to the data in order to estimate the parameters of the model. Classification, Decision Trees, Bayesian Classification, Neural Networks, and Association Rule Mining are common examples of supervised learning. In unsupervised learning, we do not create a model or hypothesis prior to the analysis; we apply the algorithm directly to the dataset and observe the results, after which a model can be created on the basis of those results. Clustering is one example of unsupervised learning. Various data mining techniques such as Classification, Decision Trees, Bayesian Classification, Neural Networks, Clustering, Association Rule Mining, Prediction, Time Series Analysis, Sequential Patterns, Genetic Algorithms, and Nearest Neighbors have been used for knowledge discovery from large data sets.
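As a toy illustration of the supervised case, the sketch below classifies a query point by the label of its closest labeled neighbor (1-nearest-neighbor, one of the techniques listed above). The data points and labels are invented for illustration; this is a minimal sketch, not a production implementation:

```python
import math

def nearest_neighbor(train, query):
    """Classify `query` by the label of its closest training point.
    In 1-NN the labeled data itself serves as the model (supervised
    learning: labels are known before the analysis)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, label = min(train, key=lambda item: dist(item[0], query))
    return label

# Hypothetical customer data: (feature vector, spending label).
train = [((1.0, 2.0), "low"), ((1.5, 1.8), "low"),
         ((8.0, 8.0), "high"), ((9.0, 11.0), "high")]

print(nearest_neighbor(train, (1.2, 1.9)))  # near the "low" cluster
print(nearest_neighbor(train, (8.5, 9.0)))  # near the "high" cluster
```

An unsupervised technique such as k-means would instead be given the same points without labels and would discover the two clusters on its own.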
With the rapid growth of the Internet and networking techniques, transferring and sharing multimedia data has become common. Multimedia data is easily copied and modified, so the need for copyright protection is increasing. Digital watermarking has been proposed as a technique for copyright protection of multimedia data: it is the imperceptible marking of multimedia data to "brand" ownership. Digital watermarking invisibly embeds copyright information into multimedia data, and has accordingly been used for copyright protection, fingerprinting, copy protection, and broadcast monitoring. A watermarking algorithm requires both invisibility and robustness, which exist in a trade-off relation; a good watermarking algorithm must satisfy both requirements. The process of digital watermarking involves modifying the original multimedia data to embed a watermark containing key information such as authentication or copyright codes. The embedding method must leave the original data perceptually unchanged, yet should impose modifications which can be detected using an appropriate extraction algorithm. Common types of signals to watermark are images, music clips, and digital video. The discussion here concentrates on the application of digital watermarking to still images. The major technical challenge is to design a highly robust digital watermarking technique that discourages copyright infringement by making the removal of a watermark tedious and costly. The advent of the Internet has resulted in many new opportunities for the creation and delivery of content in digital form. Applications include electronic advertising, real-time video and audio delivery, digital repositories and libraries, and Web publishing. An important issue that arises in these applications is the protection of the rights of all participants. It has been recognized for quite some time that current copyright laws are inadequate for dealing with digital data.
This has led to interest in developing new copy-deterrence and protection mechanisms. One such effort that has been attracting increasing interest is based on digital watermarking techniques.
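As a minimal sketch of the embed/extract interface, the toy example below hides watermark bits in the least-significant bit (LSB) of grayscale pixel values. The pixel values and payload are made up, and LSB embedding is deliberately the simplest, least robust scheme — practical watermarks embed in transform domains precisely because an LSB mark is trivially destroyed — so this illustrates the idea of imperceptible embedding, not a robust algorithm:

```python
def embed_watermark(pixels, bits):
    """Overwrite the least-significant bit of the first len(bits) pixels
    with the watermark bits; changing the lowest bit of an 8-bit pixel
    alters its value by at most 1, which is perceptually invisible."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

def extract_watermark(pixels, n):
    """Recover the first n embedded bits by reading the pixel LSBs."""
    return [p & 1 for p in pixels[:n]]

image = [200, 13, 255, 76, 142, 90]  # toy grayscale pixel values
mark = [1, 0, 1, 1]                  # 4-bit watermark payload

stamped = embed_watermark(image, mark)
assert extract_watermark(stamped, len(mark)) == mark
assert all(abs(a - b) <= 1 for a, b in zip(stamped, image))  # imperceptible
```

Robust schemes spread the same payload redundantly across many transform-domain coefficients so that compression, cropping, or filtering does not erase it.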
Mind reading is a way to detect or infer another person's mental states. The simplest form of mind reading is simply seeing and understanding a facial expression: a smile, for example, conveys happiness. It may now be possible not only for one human to understand another's mental states, but for a computer to understand the mental state of a person as well. This paper describes how a computer might infer the mental state of a person and thus become a mind reading computer. It emphasizes two ways a computer might infer mental state: one is facial expression analysis (FEA), and the other uses a futuristic headband. A mind reading computer infers the thoughts of a human being using various technologies, for example by scanning facial expressions along with head gestures, or by measuring the volume and oxygen level of the blood flowing in the vicinity of the brain. With exponential development in these technologies, we may in the future have the means to build a flawless mind reading computer.
ABSTRACT Polymers are organic materials consisting of long chains of single molecules. They are highly adaptable materials, suitable for myriad applications. Imagine a time when your mobile will be your virtual assistant and will need far more than the 8K and 16K of memory that it has today, or a world where laptops require gigabytes of memory because of the impact of convergence on the very nature of computing. How much space would your laptop need to carry all that memory capacity? Not much, if Intel's project with Thin Film Electronics ASA (TFE) works according to plan. TFE's idea is to use polymer memory modules rather than silicon-based memory modules, and what's more, it is going to use an architecture that is quite different from that of silicon-based modules. Until the 1970s and the work of Nobel laureates Alan J. Heeger, Alan G. MacDiarmid, and Hideki Shirakawa, polymers were considered only as insulators. Heeger et al. showed that polymers could be conductive: electrons were removed from, or introduced into, a polymer consisting of alternating single and double bonds between the carbon atoms. As these holes or extra electrons are able to move along the molecule, the structure becomes electrically conductive.
The concept of smart environments evolves from the definition of ubiquitous computing which, according to Mark Weiser, promotes the idea of "a physical world that is richly and invisibly interwoven with sensors, actuators, displays, and computational elements, embedded seamlessly in the everyday objects of our lives, and connected through a continuous network." Smart environments are envisioned as the byproduct of pervasive computing and the availability of cheap computing power, making human interaction with the system a pleasant experience. In the influential article "The Computer for the 21st Century", Mark Weiser created a vision of omnipresent computers that would serve people in their everyday lives at home and at work, functioning invisibly and unobtrusively in the background and freeing them from tedious routine tasks. While Weiser's basic principles – augmenting everyday artifacts with computation, sensing, and communication abilities, and using context to anticipate the user's goals and intentions – have been the subject of much research in the past years, the implications of comprehensively deploying such technology in society are much less understood. With its orientation towards the public as well as the private, the personal as well as the commercial, ubiquitous computing aspires to create technology that will accompany us throughout our whole lives, day in and day out. While developments in information technology never had the explicit goal of changing society, but rather did so as a side effect, the visions associated with ubiquitous computing expressly propose to transform the world and our society by fully computerizing it. In an ideal pervasive computing environment, a large number of connected smart devices are deployed to collaboratively provision seamless services to users. Pervasive computing is enabled by various advanced technologies, particularly wireless technologies and the Internet. It has become a trend for our future lives.
A pervasive computing environment can be extremely heterogeneous. We can imagine how many different devices are involved in a smart home: TVs, phones, cameras, coffee makers, or even books and bookshelves. Since these devices are smart and communicate with each other mainly via wireless links, security must be ensured.
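One common building block for securing such wireless links is message authentication with a pre-shared key, so a receiver can verify that a command really came from a paired device and was not altered in transit. The sketch below — the key, device name, and message format are invented for illustration — uses Python's standard hmac module to tag and verify messages:

```python
import hmac
import hashlib

# Hypothetical key distributed to the home's devices at pairing time.
SHARED_KEY = b"example-pairing-key"

def tag_message(message):
    """Append an HMAC-SHA256 tag (32 bytes) computed with the shared key."""
    return message + hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify_message(packet):
    """Return the message if its tag checks out, else None."""
    message, tag = packet[:-32], packet[-32:]
    expected = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
    return message if hmac.compare_digest(tag, expected) else None

packet = tag_message(b"coffee_maker: brew at 07:00")
assert verify_message(packet) == b"coffee_maker: brew at 07:00"
assert verify_message(b"x" + packet[1:]) is None  # tampered message rejected
```

Authentication alone does not give confidentiality; a real deployment would combine it with encryption and a proper key-exchange protocol between devices.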
ABSTRACT Trends in VLSI technology scaling demand that future computing devices be narrowly focused to achieve high performance and high efficiency, yet also target the high volumes and low costs of widely applicable general-purpose designs. To address these conflicting requirements, we propose a modular reconfigurable architecture called Smart Memories, targeted at computing needs in the 0.1 µm technology generation. A Smart Memories chip is made up of many processing tiles, each containing local memory, local interconnect, and a processor core. For efficient computation under a wide class of possible applications, the memories, the wires, and the computational model can all be altered to match the applications. To show the applicability of this design, two very different machines at opposite ends of the architectural spectrum, the Imagine stream processor and the Hydra speculative multiprocessor, are mapped onto the Smart Memories computing substrate. Simulations of the mappings show that the Smart Memories architecture can successfully map these architectures with only modest performance degradation.
Wearable health monitoring systems integrated into a telemedicine system are a novel information technology that can support early detection of abnormal conditions and prevention of their serious consequences. Many patients can benefit from continuous monitoring as part of a diagnostic procedure, optimal maintenance of a chronic condition, or supervised recovery from an acute event or surgical procedure. Important limitations to wider acceptance of the existing systems for continuous monitoring are: a) unwieldy wires between sensors and a processing unit, b) lack of system integration of individual sensors, c) interference on a wireless communication channel shared by multiple devices, and d) nonexistent support for massive data collection and knowledge discovery. Traditionally, personal medical monitoring systems, such as Holter monitors, have been used only to collect data for off-line processing. Systems with multiple sensors for physical rehabilitation feature unwieldy wires between electrodes and the monitoring system. These wires may limit the patient's activity and level of comfort and thus negatively influence the measured results, and such an organization is unsuitable for lengthy, continuous monitoring, particularly during normal activity, intensive training, or computer-assisted rehabilitation. Instead, a wearable health-monitoring device using a Personal Area Network (PAN) or Body Area Network (BAN) can be integrated into the user's clothing. Recent technology advances in wireless networking, micro-fabrication, and the integration of physical sensors, embedded microcontrollers, and radio interfaces on a single chip promise a new generation of wireless sensors suitable for many applications.