Wednesday, November 27, 2019

Steve Jobs Biography and Legacy

Apple is one of the most well-known technology companies in the world, and its popularity continues to grow with each passing year. The name Steve Jobs has become practically synonymous with Apple, the company he co-founded in 1976 and later led as CEO. Jobs led an interesting and eventful life as an entrepreneur, inventor, and designer.

Early Life

Steve was born February 24, 1955, in San Francisco, California, and was adopted by Paul and Clara Jobs. He grew up with one sister, Patty. Paul Jobs was a machinist and fixed cars as a hobby. Jobs's biological parents married and had another child, a daughter named Mona, and Steve knew nothing about his biological family until he was 27 years old. After graduating from high school in 1972, Jobs attended Reed College in Portland, Oregon, for two years. He dropped out to visit India and study Eastern religions in the summer of 1974. In 1975 Jobs joined a group known as the Homebrew Computer Club. One member, a technical whiz named Steve Wozniak, was trying to build a small computer. Jobs became fascinated with the marketing potential of such a computer. In 1976 he and Wozniak formed their company, funding it by selling Jobs's Volkswagen bus and Wozniak's prized scientific calculator. They called their new venture Apple Computer Company.

Founding Apple

Jobs and Wozniak sold their first computer, the Apple I, generating almost $775,000 in sales. They redesigned their computer with the idea of selling it to individual users, and the Apple II went to market in 1977 with impressive first-year sales of $2.7 million. The company's sales grew to almost $200 million within three years. Jobs and Wozniak had opened an entirely new market: personal computers. In 1984 Apple introduced a revolutionary new model, the Macintosh. The on-screen display had small pictures called icons.
To use the computer, the user pointed at an icon and clicked a button using a device called a mouse. This process made the Macintosh very easy to use. The Macintosh did not sell well to businesses, however, because it lacked features other personal computers had. The failure of the Macintosh signaled the beginning of Jobs's initial downfall at Apple. He resigned in 1985, though he retained his title as chairman of the board of directors. Jobs soon hired some of his former employees to begin a new computer company called NeXT. In late 1988 the NeXT computer, aimed at the educational market, was introduced at a large gala event in San Francisco. The product was very user-friendly and had a fast processing speed, excellent graphics displays, and an outstanding sound system. Despite the warm reception, however, the NeXT machine never caught on. It was too costly, had a black-and-white screen, and could not be linked to other computers or run common software. In 1986 Jobs purchased a small company called Pixar from filmmaker George Lucas. Pixar specialized in computer animation. Nine years later Pixar released Toy Story, a huge box office hit. Pixar went on to make Toy Story 2 and A Bug's Life, which Disney distributed, and Monsters, Inc., among other hits. In 2006 Pixar merged with Disney, and as a result Jobs became the largest shareholder of Disney stock. In December 1996 Apple purchased NeXT Software for over $400 million. After more than 10 years away from the company, Jobs returned to Apple as a part-time consultant to the chief executive officer (CEO).

Back at Apple

Over the next six years, Apple introduced several new products and marketing strategies. In September 1997 Jobs was named interim CEO of Apple. In November 1997 he announced that Apple would sell computers directly to users over the web and by telephone. The Apple Store became a runaway success; within a week it was the third-largest e-commerce site on the Internet.
In 1998 Jobs announced the release of the iMac, which featured powerful computing at an affordable price. The iBook was unveiled in July 1999. It included Apple's AirPort, a computer version of the cordless phone that allowed the user to surf the Internet wirelessly. In January 2000 Jobs unveiled Apple's new Internet strategy, which included a group of Macintosh-only Internet-based applications. Jobs also announced that he was becoming the permanent CEO of Apple. Apple also became a leader in the digital music revolution, having sold over 110 million iPods and over three billion songs from its iTunes online store. Apple then entered the mobile phone market in 2007 with its revolutionary iPhone.

Steve Jobs's Final Years

In 2003 Jobs was diagnosed with pancreatic cancer. Initially he delayed surgery, wanting to treat his illness with holistic methods, but he eventually had an operation to remove the tumor in 2004. The surgery was deemed successful, and in the following years Jobs disclosed little else about his health. Jobs's health started to decline noticeably in 2009. In January of that year he announced a six-month leave of absence, and in April he underwent a liver transplant, after which his prognosis was called excellent. However, a year and a half after the transplant, Jobs took another medical leave of absence. He announced his formal resignation as CEO on August 24, 2011, but continued to work as chairman of the board until October 4, 2011, the day before his death. On October 5, Jobs died of complications related to his pancreatic cancer. He was 56 years old.

Jobs's Legacy

Following Jobs's death, there were outpourings of support across the tech community. He was, posthumously, the subject of a film, an authorized biography, and a number of other books. Although none of the works covering Jobs's life depict him as a perfect man, on one thing they agree: Steve Jobs was a genius, and he died too soon.

Friday, November 22, 2019

Making Robots That Think, Part 1

Microsoft co-founder Paul Allen made headlines last month when he announced plans to invest $125 million from his nonprofit foundation in what he calls Project Alexandria, a multi-year effort to bring fundamental human knowledge to robotic and artificial intelligence (AI) systems. In short, Allen wants to create machines with common sense. "To make real progress in AI, we have to overcome the big challenges in the area of common sense," he told The New York Times.

[Photo: UC Berkeley Robot Learning Lab (from left to right): Chelsea Finn, Pieter Abbeel, Trevor Darrell, and Sergey Levine. Credit: UC Berkeley]

It was a splashy announcement for a technical capability that researchers have been working on quietly for some time. Robotics has come a long way since the turn of the century, with hardware and software available that enable machines to complete a variety of complex tasks, such as assembling products on an assembly line, performing delicate medical work, and working underwater, in outer space, and in other inhospitable environments. But limitations remain. Robots excel at repetitive, assignable tasks such as tightening a screw over and over, but they don't yet work well in situations where they are forced to work alongside others or to think and plan actions for themselves. Allen's research aims to address this shortcoming by developing machines that can do more of the same mental exercises that humans can, and then using that newfound knowledge to build smarter, more adaptable robots. That is just part of the solution.
Robotics engineers are also working on systems that help robots think beyond the tasks they are pursuing on a day-to-day basis (the work they have been programmed to do) and instead develop the forethought they need to learn and adapt to new challenges, effectively picking up new skills on the fly, independent of what humans teach them. This functionality is the basis of work being done by Dr. Sergey Levine, an assistant professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. His research focuses on the intersection between control and machine learning, developing, as he says, "algorithms and techniques that can endow machines with the ability to autonomously acquire the skills for executing complex tasks." Levine's most recent work in this area is focused on the concept of visual foresight: enabling machines to visualize their future actions so that they can figure out on their own what they need to do in situations they have never experienced before. This is accomplished by using the robot's cameras to visualize a set of movements, and then allowing its software to process those visual cues into actions that mimic those movements. "What we were thinking about were some of the differences between how robots manipulate objects in their environment and how people do it," Dr. Levine says. "A lot of the standard approaches to robotic manipulation involve, essentially, modeling the world, planning through that model, and then executing that plan using whatever control we happened to have." Part 2 looks at the different applications those advancements in robotic AI can target.

Tim Sprinkle is an independent writer.
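To make the "model the world, plan through that model, then execute" pipeline concrete, here is a minimal toy sketch, not Dr. Levine's actual system: a robot on a one-dimensional line uses a hand-built world model to search action sequences that reach a goal position, then executes the cheapest one. All names and the world itself are illustrative.

```python
# Toy illustration of the classic model-plan-execute pipeline.
# NOT the visual-foresight system described in the article; a hand-built
# model stands in for what a learned model would predict.
from itertools import product

ACTIONS = [-1, 0, 1]  # move left, stay put, move right


def model(state, action):
    """World model: predicts the next state given an action."""
    return state + action


def plan(start, goal, horizon=4):
    """Search all action sequences through the model; return the
    cheapest one whose predicted final state reaches the goal."""
    best = None
    for seq in product(ACTIONS, repeat=horizon):
        state = start
        for a in seq:
            state = model(state, a)  # roll the plan forward in the model
        if state == goal:
            cost = sum(abs(a) for a in seq)
            if best is None or cost < best[0]:
                best = (cost, seq)
    return best[1] if best else None


def execute(start, seq):
    """Carry out the plan. On a real robot each step would be a motor
    command; here the world happens to match the model exactly."""
    state = start
    for a in seq:
        state = model(state, a)
    return state


seq = plan(start=0, goal=3)
print(execute(0, seq))  # the robot ends at the goal position, 3
```

The limitation Levine's work targets is visible even in this sketch: the pipeline is only as good as its model, which here was written by hand rather than learned from the robot's own camera observations.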

Thursday, November 21, 2019

Definition of Web Spidering and Web Crawlers

Spiders are programs (or automated scripts) that crawl through the Web looking for data. Spiders travel through website URLs and can pull data from web pages, such as email addresses. Spiders are also used to feed information found on websites to search engines. Spiders, which are also referred to as web crawlers, search the Web, and not all of them are friendly in their intent.

Spammers Spider Websites to Collect Information

Google, Yahoo, and other search engines are not the only ones interested in crawling websites; so are scammers and spammers. Spammers use spiders and other automated tools to find email addresses on websites (on the Internet this practice is often referred to as "harvesting") and then use them to create spam lists. Spiders are also a tool search engines use to find out more information about your website, but left unchecked, a website without instructions (or permissions) on how to crawl it can present major information security risks. Spiders travel by following links, and they are very adept at finding links to databases, program files, and other information to which you may not want them to have access. Webmasters can view logs to see which spiders and other robots have visited their sites. This information helps webmasters know who is indexing their site, and how often. It is useful because it allows webmasters to fine-tune their SEO and update robots.txt files to prohibit certain robots from crawling their site in the future.

Tips on Protecting Your Website From Unwanted Robot Crawlers

There is a fairly simple way to keep unwanted crawlers out of your website. Even if you are not concerned about malicious spiders crawling your site (obfuscating email addresses will not protect you from most crawlers), you still need to provide search engines with important instructions.
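The two things a spider does with each page it visits, following links and pulling out data such as email addresses, can be sketched in a few lines. This is a simplified illustration, not a production crawler: the HTML is hardcoded and the regular expressions are deliberately naive, where a real spider would fetch pages over HTTP and parse them properly.

```python
# Illustrative sketch of what a spider does with a fetched page:
# extract links to follow next, and harvest any email addresses.
# The page content here is hardcoded for the example.
import re

page = """
<html><body>
  <a href="https://example.com/about">About</a>
  <a href="https://example.com/contact">Contact</a>
  Reach us at info@example.com or sales@example.com.
</body></html>
"""

# URLs the spider would add to its crawl queue
links = re.findall(r'href="([^"]+)"', page)

# Addresses a spammer's harvester would add to a spam list
emails = re.findall(r'[\w.+-]+@[\w-]+\.\w+', page)

print(links)   # ['https://example.com/about', 'https://example.com/contact']
print(emails)  # ['info@example.com', 'sales@example.com']
```

This is also why obfuscating addresses offers little protection: a harvester only needs a slightly more forgiving pattern to match "info [at] example [dot] com" as well.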
All websites should have a file, located in the root directory, called a robots.txt file. This file allows you to instruct web crawlers where you want them to look to index pages (unless a specific page's metadata states that it is to be no-indexed). Just as you can tell wanted crawlers where you want them to browse, you can also tell them where they may not go, and you can even block specific crawlers from your entire website. It is important to bear in mind that a well put together robots.txt file will have tremendous value for search engines and could even be a key element in improving your website's performance, but some robot crawlers will still ignore your instructions. For this reason, it is important to keep all your software, plugins, and apps up to date at all times.

Related Articles and Information

Due to the prevalence of information harvesting for nefarious (spam) purposes, legislation was passed in 2003 to make certain practices illegal. These consumer protection laws fall under the CAN-SPAM Act of 2003. It is important that you take the time to read up on the CAN-SPAM Act if your business engages in any mass mailing or information harvesting. You can find out more about anti-spam laws, how to deal with spammers, and what you as a business owner may not do by reading the following articles:

CAN-SPAM Act 2003
CAN-SPAM Act Rules for Nonprofits
5 CAN-SPAM Rules Small Business Owners Need to Understand
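Finally, a minimal robots.txt of the kind described above might look like the following. The paths, sitemap URL, and the "BadBot" crawler name are illustrative; substitute your own directories and the user-agent strings of the crawlers you want to block.

```
# Allow all crawlers, but keep them out of private areas
User-agent: *
Disallow: /admin/
Disallow: /cgi-bin/

# Block one specific crawler from the entire site
# ("BadBot" is a placeholder user-agent name)
User-agent: BadBot
Disallow: /

# Point crawlers at the sitemap
Sitemap: https://www.example.com/sitemap.xml
```

Remember that these directives are advisory: well-behaved crawlers such as search engines honor them, but malicious spiders are free to ignore the file entirely.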