“Big data,” as it has become known in business and information technology circles, has the potential to improve our knowledge about human behavior and to help us gain insight into the ways in which we organize ourselves, our cultures, and our external and internal lives. Libraries stand at the center of the information world, both facilitating and contributing to this flood of data as well as helping to shape and channel it to specific purposes. But all technologies come with a price: where a tool can serve a purpose, it can also change the user’s behavior to fit the purposes of the tool. Big Data Shocks: An Introduction to Big Data for Librarians and Information Professionals examines the roots of big data, the current climate, and the rising stars in this world. The book explores the issues raised by big data and discusses theoretical as well as practical approaches to managing information whose scope exists beyond the human scale. What’s at stake, ultimately, is the privacy of the people who support and use our libraries, and the temptation for us to examine their behaviors. Such tension lies deep in the heart of our great library institutions. This book addresses these issues and many of the questions that arise from them, including:
- What is our role as librarians in this new era of big data?
- What are the impacts of powerful new technologies that track and analyze our behavior?
- Do data aggregators know more about us and our patrons than we do?
- How can librarians ethically balance the need to demonstrate learning and knowledge creation against patrons’ privacy?
- Do we become less private merely because we use a tool, or is it because the tool has changed us?
- What’s in store for us as the internet of things combines with data-mining techniques?
All of these questions and more are explored in this book.
The third edition of Preserving Digital Materials provides a survey of the digital preservation landscape. The book is structured around four questions:
1. Why do we preserve digital materials?
2. What digital materials do we preserve?
3. How do we preserve digital materials?
4. How do we manage digital preservation?
This concise handbook and reference serves a wide range of stakeholders who need to understand how preservation works in the digital world, and it notes the increasing importance of new stakeholders and the general public in digital preservation. It can be used both as a textbook for teaching digital preservation and as a guide for the many stakeholders who engage in it. Its synthesis of current information, research, and perspectives on digital preservation, drawn from a wide range of sources across many areas of practice, makes it of interest to all who are concerned with digital preservation. It will be of use to preservation administrators and managers who want a professional reference text, to information professionals who wish to reflect on the issues that digital preservation raises in their professional practice, and to students in the field of digital preservation.
How do infants learn a language? Why and how do languages evolve? How do we understand a sentence? This book explores these questions using recent computational models that shed new light on issues related to language and cognition. The chapters in this collection propose original analyses of specific problems and develop computational models that have been tested and evaluated on real data. Featuring contributions from a diverse group of experts, this interdisciplinary book bridges the gap between natural language processing and the cognitive sciences. It is divided into three sections, focusing respectively on models of neural and cognitive processing, data-driven methods, and social issues in language evolution. This book will be useful to any researcher or advanced student interested in the analysis of the links between the brain and the language faculty.
- The power of big data in cybersecurity
- Big data analytics for network forensics
- Dynamic analytics-driven assessment of vulnerabilities and exploitation
- Big data analytics for mobile app security
- Machine unlearning: repairing learning models in adversarial environments
- Cybersecurity training
- Security, privacy and trust in cloud computing: challenges and solutions
- Cybersecurity in the internet of things (IoT)
- Data visualization for cyber security
- Analyzing deviant socio-technical behaviors using social network analysis and cyber forensics-based methodologies
- Security tools
- Data and research initiatives for cybersecurity analysis
A new edition of a book, written in a humorous question-and-answer style, that shows how to implement and use an elegant little programming language for logic programming. The goal of this book is to show the beauty and elegance of relational programming, which captures the essence of logic programming. The book shows how to implement a relational programming language in Scheme, or in any other functional language, and demonstrates the remarkable flexibility of the resulting relational programs. As in the first edition, the pedagogical method is a series of questions and answers, which proceed with the characteristic humor that marked The Little Schemer and The Seasoned Schemer. Familiarity with a functional language or with the first five chapters of The Little Schemer is assumed. For this second edition, the authors have greatly simplified the programming language used in the book, as well as the implementation of the language. In addition to revising the text extensively, and simplifying and revising the “Laws” and “Commandments,” they have added explicit “Translation” rules to ease translation of Scheme functions into relations.
What artificial intelligence can tell us about the mind and intelligent behavior. What can artificial intelligence teach us about the mind? If AI’s underlying concept is that thinking is a computational process, then how can computation illuminate thinking? It’s a timely question. AI is all the rage, and the buzziest AI buzz surrounds adaptive machine learning: computer systems that learn intelligent behavior from massive amounts of data. This is what powers a driverless car, for example. In this book, Hector Levesque shifts the conversation to “good old-fashioned artificial intelligence,” which is based not on heaps of data but on understanding commonsense intelligence. This kind of artificial intelligence is equipped to handle situations that depart from previous patterns, as we do in real life when, for example, we encounter a washed-out bridge or the barista informs us there’s no more soy milk. Levesque considers the role of language in learning. He argues that a computer program that passes the famous Turing Test could be a mindless zombie, and he proposes another way to test for intelligence: the Winograd Schema Test, developed by Levesque and his colleagues. “If our goal is to understand intelligent behavior, we had better understand the difference between making it and faking it,” he observes. He identifies a possible mechanism behind common sense and the capacity to call on background knowledge: the ability to represent objects of thought symbolically. As AI migrates more and more into everyday life, we should worry if systems without common sense are making decisions where common sense is needed.
Learn about an information-theoretic approach to managing interference in future generation wireless networks. Focusing on cooperative schemes motivated by Coordinated Multi-Point (CoMP) technology, the book develops a robust theoretical framework for interference management that uses recent advancements in backhaul design, and practical pre-coding schemes based on local cooperation, to deliver the increased speed and reliability promised by interference alignment. Gain insight into how simple, zero-forcing pre-coding schemes are optimal in locally connected interference networks, and discover how significant rate gains can be obtained by making cell association decisions and allocating backhaul resources based on centralized (cloud) processing and knowledge of network topology. Providing a link between information-theoretic analyses and interference management schemes that are easy to implement, this is an invaluable resource for researchers, graduate students and practicing engineers in wireless communications.
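The zero-forcing precoding idea this description mentions can be illustrated with a toy example. The channel matrix, symbols, and dimensions below are our own hypothetical choices, not taken from the book: for a two-user channel H, precoding with W = H⁻¹ makes the effective channel H·W the identity, so each receiver sees only its own symbol and inter-user interference is cancelled.

```python
# Toy zero-forcing precoding sketch for a hypothetical 2-user channel.
# Precoding the symbols with the channel inverse cancels interference,
# so (noise-free) each receiver recovers exactly its intended symbol.

def mat2_inv(H):
    """Inverse of a 2x2 matrix [[a, b], [c, d]]."""
    (a, b), (c, d) = H
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def mat2_vec(H, x):
    """Multiply a 2x2 matrix by a length-2 vector."""
    return [H[0][0] * x[0] + H[0][1] * x[1],
            H[1][0] * x[0] + H[1][1] * x[1]]

H = [[1.0, 0.4],   # direct gain to user 1, interference from user 2's stream
     [0.3, 1.0]]   # interference from user 1's stream, direct gain to user 2
s = [1.0, -1.0]    # symbols intended for users 1 and 2

W = mat2_inv(H)    # zero-forcing precoder
x = mat2_vec(W, s) # precoded transmit signal
y = mat2_vec(H, x) # received signal: equals s, interference removed
```

The same idea scales to larger, locally connected networks, where (as the book discusses) exploiting the local connectivity keeps the cooperation needed for such precoding manageable.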
Following the successful PCS Auction conducted by the US Federal Communications Commission in 1994, auctions have replaced traditional ways of allocating valuable radio spectrum, a key resource for any mobile telecommunications operator. Spectrum auctions have raised billions of dollars worldwide and have become a role model for market-based approaches in the public and private sectors. The design of spectrum auctions is a central application of game theory and auction theory due to its importance in industry and the theoretical challenges it presents. Several auction formats have been developed with different properties addressing fundamental questions about efficiently selling multiple objects to a group of buyers. This comprehensive handbook features classic papers and new contributions by international experts on all aspects of spectrum auction design, including pros and cons of different auctions and lessons learned from theory, experiments, and the field, providing a valuable resource for regulators, telecommunications professionals, consultants, and researchers.
This book presents the foundations of phylogeny estimation and technical material enabling researchers to develop improved computational methods.
A comprehensive account of both basic and advanced material in phylogeny estimation, focusing on computational and statistical issues. No background in biology or computer science is assumed, and there is minimal use of mathematical formulas, meaning that students from many disciplines, including biology, computer science, statistics, and applied mathematics, will find the text accessible. The mathematical and statistical foundations of phylogeny estimation are presented rigorously, after which more advanced material is covered. This includes substantial chapters on multi-locus phylogeny estimation, supertree methods, multiple sequence alignment techniques, and designing methods for large-scale phylogeny estimation. The author provides key analytical techniques for proving theoretical properties of methods, and also addresses the practical performance of tree-estimation methods. Research problems requiring novel computational methods are also presented, so that graduate students and researchers from varying disciplines will be able to enter the broad and exciting field of computational phylogenetics.
Designed to provide students with the knowledge needed to protect computers and networks from increasingly sophisticated attacks, SECURITY AWARENESS: APPLYING PRACTICAL SECURITY IN YOUR WORLD, Fifth Edition continues to present the same straightforward, practical information that has made previous editions so popular. For most students, practical computer security poses some daunting challenges: What types of attacks will antivirus software prevent? How do I set up a firewall? How can I test my computer to be sure that attackers cannot reach it through the Internet? When and how should I install Windows patches? This text is designed to help students understand the answers to these questions through a series of real-life user experiences. In addition, hands-on projects and case projects give students the opportunity to test their knowledge and apply what they have learned. SECURITY AWARENESS: APPLYING PRACTICAL SECURITY IN YOUR WORLD, Fifth Edition contains up-to-date information on relevant topics such as protecting mobile devices and wireless local area networks. Important Notice: Media content referenced within the product description or the product text may not be available in the ebook version.
Scientific Python is a significant free, open-source alternative to expensive proprietary software packages. This book teaches from scratch everything the working scientist needs to know, using copious, downloadable, useful, and adaptable code snippets. Readers will discover how easy it is to implement and test non-trivial mathematical algorithms and will be guided through the many freely available add-on modules. A range of examples, relevant to many different fields, illustrates the language’s capabilities. The author also shows how to use pre-existing legacy code (usually in Fortran 77) within the Python environment, thus avoiding the need to master the original code. In this new edition, several chapters have been rewritten to reflect the IPython notebook style. With an extended index, an entirely new chapter discussing SymPy, and a substantial increase in the number of code snippets, researchers and research students will be able to quickly acquire all the skills needed for using Python effectively.
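To give a flavor of the kind of short, adaptable snippet such a book trades in (this particular example is ours, not taken from the book), here is a self-contained numerical-integration routine using only the standard library:

```python
import math

def trapezoid(f, a, b, n=1000):
    """Approximate the integral of f over [a, b] with the
    composite trapezoidal rule on n equal subintervals."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for k in range(1, n):
        total += f(a + k * h)
    return h * total

# The integral of sin(x) over [0, pi] is exactly 2;
# the approximation converges quadratically as n grows.
approx = trapezoid(math.sin, 0.0, math.pi)
```

In practice one would reach for NumPy or SciPy (both covered in the book) rather than a hand-rolled loop, but the pure-Python version shows how little ceremony a working algorithm requires.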
Roughly inspired by the human brain, deep neural networks trained with large amounts of data can solve complex tasks with unprecedented accuracy. This practical book provides an end-to-end guide to TensorFlow, the leading open source software library that helps you build and train neural networks for computer vision, natural language processing (NLP), speech recognition, and general predictive analytics. Authors Tom Hope, Yehezkel Resheff, and Itay Lieder provide a hands-on approach to TensorFlow fundamentals for a broad technical audience, from data scientists and engineers to students and researchers. You’ll begin by working through some basic examples in TensorFlow before diving deeper into topics such as neural network architectures, TensorBoard visualization, TensorFlow abstraction libraries, and multithreaded input pipelines. Once you finish this book, you’ll know how to build and deploy production-ready deep learning systems in TensorFlow.
- Get up and running with TensorFlow, rapidly and painlessly
- Learn how to use TensorFlow to build deep learning models from the ground up
- Train popular deep learning models for computer vision and NLP
- Use extensive abstraction libraries to make development easier and faster
- Learn how to scale TensorFlow, and use clusters to distribute model training
- Deploy TensorFlow in a production setting
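The core computation a library like TensorFlow automates can be sketched without any dependencies. The following is not the book’s code and uses no TensorFlow at all; the layer sizes, weights, and activation are hypothetical choices, chosen only to show what a fully connected feed-forward pass computes:

```python
import math

def dense(x, W, b, activation=math.tanh):
    """One fully connected layer: activation(W @ x + b),
    written out with plain Python lists."""
    return [activation(sum(w * xj for w, xj in zip(row, x)) + bi)
            for row, bi in zip(W, b)]

# Hypothetical weights: 2 inputs -> 3 hidden units -> 1 output.
W1 = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]
b1 = [0.0, 0.1, -0.1]
W2 = [[1.0, -1.0, 0.5]]
b2 = [0.2]

h = dense([1.0, 2.0], W1, b1)  # hidden activations
y = dense(h, W2, b2)           # network output
```

TensorFlow’s contribution, as the book describes, is doing this (and the corresponding gradient computation for training) efficiently over large tensors, on GPUs, and across clusters.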