
Pirate MEP Proposes Major Reform of EU Copyright


The idea of copyright is certainly not new and most countries worldwide have developed complex systems to ensure that it’s upheld, ostensibly to protect the rights of creators.

But with the unprecedented advancement of communications technology, especially in respect of the Internet, copyright frameworks often appear terribly outdated and unfit for purpose.

In 2015 the EU has its collective eyes on copyright reform and to this end has appointed an individual whose political party has more focus than most on the world of copyright.

Last November, Julia Reda, a politician for the German Pirate Party and member of the European Parliament, was tasked with producing a report on the implementation of the 2001 InfoSoc Directive.

Having already presented her plans during a meeting of the Legal Affairs Committee in December, this morning Reda released a first draft of her report. It will come as no surprise that the need for reform is underlined.

“Although the directive was meant to adapt copyright to the digital age, in reality it is blocking the exchange of knowledge and culture across borders today,” Reda’s core finding reads.

The report draws on responses to a public consultation and lays out a reform agenda for the overhaul of EU copyright. It finds that the EU would benefit from a copyright mechanism that not only protects past works, but also encourages future creation and the unlocking of a pan-European cultural market.

“The EU copyright directive was written in 2001, in a time before YouTube or Facebook. Although it was meant to adapt copyright to the digital age, in reality it is blocking the exchange of knowledge and culture across borders today,” Reda explains.

“We need a common European copyright that safeguards fundamental rights and makes it easier to offer innovative online services in the entire European Union.”

The draft (pdf) acknowledges the need for artistic works to be protected under law and calls for improvements in the positions of authors and performers “in relation to other rightholders and intermediaries.”

The document recommends that public sector information should be exempt from copyright protection and calls on the Commission to safeguard public domain works while recognizing rightsholders’ freedom to “voluntarily relinquish their rights and dedicate their works to the public domain.”

Copyright lengths are also tackled by Reda, who calls on the Commission to harmonize the term to a duration that does not exceed the current international standards set out in the Berne Convention.

On Internet hyperlinking, the report requests that citizens are allowed to freely link from one resource to another and calls on the EU legislator “to clarify that reference to works by means of a hyperlink is not subject to exclusive rights, as it does not consist in a communication to a new public.”

The document also calls for new copyright exceptions to be granted for research and educational purposes to not only cover educational establishments, but “any kind of educational and research activities, including non-formal education.”

Also of interest is Reda’s approach to transparency. Since being appointed, Reda says she’s received 86 meeting requests from lobbyists. As can be seen from the chart below, requests increased noticeably after the Pirate was named as rapporteur in November 2014.

[Graph: meeting requests received by Reda, before and after her appointment as rapporteur]

“I did my best to balance out the attention paid to various interest groups. Most requests came from publishers, distributors, collective rights organizations, service providers and intermediaries (57% altogether), while it was more difficult to get directly to the group most often referred to in public debate: The authors,” Reda explains.

“The results of the copyright consultation with many authors’ responses demonstrate that the interests of collecting societies and individual authors can differ significantly.”

Reda has published a full list of meetings that took place. It includes companies such as Disney and Google, and ‘user’ groups such as the Free Software Foundation Europe.

“Tomorrow morning around 9 I’m going to publish my report on EU #copyright, discussion in legal affairs committee on Tuesday,” Reda reported a few minutes ago.

The final report will be put to an April vote in the Legal Affairs Committee and then to a vote before the entire Parliament during May.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.

marcell
3591 days ago
“Although the directive was meant to adapt copyright to the digital age, in reality it is blocking the exchange of knowledge and culture across borders today,” Reda’s core finding reads.
zagreb, croatia

Party Like it’s 2000: Revisiting Crypto


At the time when I first studied law, my interest in technology was entirely separate and parallel. They intersected on just one occasion, owing to the requirement of a note from one’s tutor stating that the requested email address/shell account was necessary for purposes of scholarly activity; in those days email accounts were issued automatically only to maths and computer science students, and everyone else had to demonstrate that they needed one.

There followed many all-night sessions in the computer labs (the only buildings open 24 hours!) and conversations with nerds who began to drop in to the bookshop where I worked. Sometimes this just meant riffing on the exotic ideas encountered on Usenet (Ireland was seriously theocratic and very insular), but inexorably discussion would return to speculation on the political consequences of the new medium in two areas: copyright and surveillance/political control.

So when I later decided to return to law, it was natural to focus on these conflicts. My emphasis was originally on cryptography. In retrospect I guess this is because that moment was a kind of peak of political absurdity. Encryption technologies were still classed as dual use technologies by government, meaning that they had both civilian and military applications, and were thus subjected to a special regulatory regime limiting their export. At the same time the encryption software PGP (Pretty Good Privacy) was available for download from the net in flagrant breach of US export controls – the International Traffic in Arms Regulations (ITAR). Daniel Bernstein was challenging the constitutionality of these arrangements in the US while Phil Karn was filing requests with the US State Department to check whether a book, Applied Cryptography, and accompanying floppy disk were subject to export restrictions; it turned out the book wasn’t and the floppy was (I got a copy of both from Amazon and never used either!).
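
The absurdity is easier to see with the arithmetic in front of you: the “munition” at issue boils down to a few lines of modular exponentiation. Here is a toy RSA round trip in Python, using the classic textbook parameters (purely illustrative, with no real security):

```python
# Toy RSA key setup and round trip. These are the standard textbook
# parameters (p=61, q=53); real keys use primes hundreds of digits long.
# The point: the whole "dual use technology" is ordinary arithmetic.

p, q = 61, 53            # two small primes
n = p * q                # public modulus (3233)
phi = (p - 1) * (q - 1)  # Euler's totient of n
e = 17                   # public exponent, coprime to phi
d = pow(e, -1, phi)      # private exponent: inverse of e modulo phi (Python 3.8+)

message = 42                       # any integer smaller than n
ciphertext = pow(message, e, n)    # encrypt: m^e mod n
recovered = pow(ciphertext, d, n)  # decrypt: c^d mod n
assert recovered == message
```

Bernstein’s argument, roughly, was that publishing such source code is speech; Karn’s requests exposed the oddity that the same lines were exportable on paper but not on disk.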

Investigative journalist Duncan Campbell had already uncovered the first bits of information about a surveillance dragnet called Echelon. Meanwhile the US government had spent years trying to inject compromised encryption hardware into the public’s computers and phones via its Clipper Chip proposal. This would have provided law enforcement with a side-door entrance to encrypted communications on foot of a warrant obtained as part of an investigation, but required that the secret keys necessary for this be stored at a location accessible to the police. Were they to be excluded from access to plaintext, we were told, the consequences would be dire: the four horsemen of the infocalypse – terrorists, drug dealers, paedophiles and money-launderers – would ride forth unleashing their villainy on the innocent. A little later there was an international scandal involving a shady Swiss firm called Crypto AG, which was supplying compromised encryption systems to governments. When the exploit was revealed the Vatican was the first ‘user’ to change its system. In short, these were exciting times, the rock and roll period of the so-called crypto wars.

Absurdly it was still possible then to imagine a field of ‘computer technology and the law’: the number of users was still small; the legal disputes actually reaching a judge were few; even the range of devices was limited. I gobbled it all up: digital signatures, data protection and copyright. Then I came across articles about Digital Rights Management systems and realized that where I had imagined a politically mobilized populace embracing PGP to engage in oppositional politics, it was more likely that users would encounter encryption as a lock preventing them from having access to the media cookie jar. Whereas the inability of governments to prevent civilian access to strong cryptography was foretold, the copyright and allied industries (mostly in the patent and trademark sectors) were well-organised, and had achieved considerable success in rewriting the law at both domestic (DMCA, European Copyright Duration Directive) and international levels (GATT-TRIPS, WIPO-WCT). Thus in the United States the DMCA made it an offense both (a) to produce and distribute tools for the circumvention of DRM access controls on media and (b) to engage in the act of circumvention itself – irrespective of whether a breach of copyright occurred.

But the copyright industry’s victory turned out to be easier at the level of lobbying and legislation than it was in reality once these technologies were released into the wild. The dream of perfect technological control turned out to be a mirage. Worse, the internet ensured that once an access control technology was defeated once, it was effectively defeated everywhere, as the developers of the protection systems for DVDs and digital music formats were to discover at some expense. In 1999, just as the means to neutralize the DRM on DVDs was being made public, Napster, the first p2p system, appeared on the scene.

Thus at the turn of the millennium the struggle for public access to strong cryptography seemed to have been won, and the copyright industry’s efforts to retain control of distribution seemed to be skidding on the black ice of technological history. Such was the mood in January 2001 when Steven Levy’s celebratory account, Crypto, was published, with the unfortunate subtitle ‘How the Code Rebels Beat the Government – Saving Privacy in the Digital Age’. By year’s end that tune would appear mistaken.

To be continued.



We Experiment On Human Beings!

I’m the first to admit it: we might be popular, we might create a lot of great relationships, we might blah blah blah. But OkCupid doesn’t really know what it’s doing. Neither does any other website. It’s not like people have been building these things for very long, or you can go look up a […]
marcell
3766 days ago
"The four-message threshold is our internal measure for a real conversation. And though the data is noisier, this same “higher display means more success” pattern seems to hold when you look at contact information exchanges, too.

This got us worried—maybe our matching algorithm was just garbage and it’s only the power of suggestion that brings people together. So we tested things the other way, too: we told people who were actually good for each other, that they were bad, and watched what happened.

As you can see, the ideal situation is the lower right: to both be told you’re a good match, and at the same time actually be one. OkCupid definitely works, but that’s not the whole story. And if you have to choose only one or the other, the mere myth of compatibility works just as well as the truth."

zagreb, croatia
8 public comments
llucax
3767 days ago
They are back! And with a new book!
Berlin
weelillad
3767 days ago
Fascinating
Singapore
mgeraci
3768 days ago
Hooray, the blog is back! We've all been really excited as these experiments were running and Christian was writing the post.
Duluth, MN
tdwiegand
3768 days ago
Really happy to see a new post on this blog (even if it is partially an advertisement for a book)
Berkeley, California
jprodgers
3768 days ago
I hope these come back. They always give me things to think about.
Somerville, MA
skorgu
3768 days ago
OKtrends is back!
AaronPresley
3768 days ago
Data is a hell of a thing.
Portland, OR
dreadhead
3768 days ago
Always find these fascinating.
Vancouver Island, Canada

Dreaming of the online library

1 Comment

“In the catalog of History the Public Library is listed in the category of phenomena that we humans are most proud of. Along with the free public education, public health care, scientific method, Universal Declaration of Human Rights, Wikipedia, Free Software…”[1]

Long before the slogan ‘information wants to be free’[2] was coined, Melvil Dewey - the forefather of the modern public library - dreamed of ‘free libraries for every soul’.[3] In 19th-century America, Dewey was instrumental in campaigning for equitable access to books for all through a comprehensive system of free public libraries. More importantly though, Dewey invented the Dewey Decimal Classification system, which for the first time organised library titles by subject matter, making book browsing possible and providing the basis of the organisation of the modern library. Prior to this, books were sorted variously by accession date and size (sometimes even by colour), and the stacks were closed to all but the most privileged patrons. Anyone else was required to request a specific title for the librarian to retrieve, or had one selected on their behalf. Browsing by topic was impossible, and huge swathes of literature were made fiendishly difficult to access for the uninitiated or under-educated; the system of searching for a book was an early equivalent of the ‘I’m Feeling Lucky’ option provided by Google rather than a detailed index of results. What Dewey did for libraries was to put each book unambiguously in its proper place via a logical system of classification, making it possible for anyone to find the book they wanted. Dewey’s system was a logical forerunner to the ways in which we browse and navigate information online today.
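
The mechanism is easy to sketch: a Dewey-style call number is a positional code in which each successive digit narrows the subject, so one number fixes a book’s unique shelf position and shelf order doubles as a subject browse. A minimal illustration in Python (the class names echo real top-level Dewey divisions, but the tiny table and the `describe` helper are invented for illustration):

```python
# Sketch of Dewey-style hierarchical classification: each extra digit of
# a call number refines the subject, so every prefix of the number names
# a progressively broader category.

# Tiny illustrative subset of the schedules (the real tables run to volumes).
CLASSES = {
    "5": "Science",
    "51": "Mathematics",
    "512": "Algebra",
    "6": "Technology",
    "62": "Engineering",
}

def describe(call_number):
    """Expand a call number into its chain of ever-narrower subjects."""
    prefixes = (call_number[:i] for i in range(1, len(call_number) + 1))
    return [CLASSES[p] for p in prefixes if p in CLASSES]

print(describe("512"))  # ['Science', 'Mathematics', 'Algebra']
```

Sorting the call numbers themselves (‘51’ before ‘512’ before ‘52’) is what puts related books side by side on the shelf, which is exactly the topical browsing Dewey made possible.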

It seems to be an unarguable fact that libraries are a public good if not a moral standard of a modern, civilised society. They are the physical and societal manifestation of a commonly held principle that it is in society’s best interests for information to flow freely, and there are very few voices that would publicly dispute this point. However, over the past decade the fate of public libraries (in the western world at least) has been on a downward trajectory, threatened on all sides by political, economic and social factors. Previously held standards of access and preservation are under considerable threat, in some cases a very real threat of destruction and coercion, actual libricide not restricted to war-torn countries or fundamentalist regimes. A current example, illustrative in many ways of the broad changes in governmental attitudes towards library collections across the Western world, is the declining situation of the Canadian national library and archives collection.[4]

In 2012, the Conservative government of Canadian Prime Minister Stephen Harper announced major changes in the running of Libraries and Archives Canada (LAC), the public body charged with collecting, preserving and making accessible the documentary heritage of the Canadian nation. To put LAC in context, it is the Canadian equivalent of the British Library (the national library of the UK, whose tagline is: “Explore the world’s knowledge”) or the Library of Congress (national library of the USA and the world’s largest library), the national repository for the written word. Under the guise of a modernisation programme, $9.6 million was cut from the LAC budget with the loss of four hundred archivists and librarians, the closure of national archive sites and the slashing of budgets for new books. ‘Information without enduring value’[5] would be subject to charges for access and the interlibrary loan service was terminated, with a vague assurance that content would be available digitally in the future, although at the time of the announcement only 4% of the existing collection was digitised.[6] Although LAC committed themselves to acquiring ‘digital only’[7] content by 2017, this commitment was undermined by significant reductions in the number of digitisation staff. The dream of a universally accessible digital library was mobilised as a smokescreen for a large scale dismantling of the effective infrastructure of the LAC. Most worrying, however, was what happened after the closure of the national archives sites. What was promised was that materials to be discarded after the rationalisation process would be preserved digitally, but only an insignificant fraction of documents were scanned. The rest were sent to landfill or burned.[8] No records were kept of what was thrown away, what was burnt and what was lost.
Particularly affected were specialist environmental science library collections,[9] holding decades’ worth of research on Canada’s environment, oceans and fisheries. An irreplaceable collection of logs from the 19th century voyage of HMS Challenger went to landfill.[10] The internal library of Health Canada was dismantled, along with those of eleven other ministries, and access was outsourced to a private company. This resulted in several Health Canada scientists resorting to setting up their own unofficial library collections, or borrowing student and lecturer library cards, to work around the long lead times and high fees of the new system.[11]

While all this may seem to be an inevitable effect of the tight squeeze applied to the public budgets of most countries after the crash of 2008, it’s important to remember that Canada has outperformed both the US and Britain on a spectrum of positive economic indicators from 2009 onwards.[12] The loss of so much irreplaceable library and archive material must be seen in the context of what has been dubbed ‘Canada’s war on Science’,[13] the wide-ranging attempts by the climate-change-sceptical Conservative government to stifle dissenting scientific voices and suppress inconvenient evidence, at a time when there is an ongoing resource-rush to mine the natural wealth of Canada’s oceans and Northern territories. As Canadian biologist Jeff Hutchings puts it, “Losing libraries is not a neutral act”.[14] The destruction of libraries is an ideologically loaded act, one that undermines access to and exchange of knowledge. In this particular national context, it can be seen as an act of censorship in the interests of those in power, “an attempt to guarantee public ignorance”.[15]

The destruction of books has long been a tool to silence dissenting voices and undermine cultural integrity. From the earliest days of the library there have been library burnings, ceremonial and public. What the persistence of this kind of libricide underlines is the principle that information is both a source of and a threat to power. Libraries may be a moral standard but they can also be sites of resistance, as comprehensive bodies of knowledge are more often than not contestational bodies also. While the previously mentioned phrase ‘information wants to be free’ is often quoted, we don’t usually get its full context: 

“On the one hand, information wants to be expensive, because it’s so valuable. The right information in the right place just changes your life. On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time. So you have these two fighting against each other.”[16]

Control over the flows of information in the age of networks and digital technology is the ultimate tool of power, and now more than ever this power is economic as well as political and cultural. Consider, for example, the economic power gained from the continuation of lucrative oil extraction projects in the Canadian tar sands. The digital world is in the process of establishing a new ruling class, a class of governmental institutions and private corporations whose power ultimately resides in a monopoly on information access and transmission. The most iconic corporations of the new century, Amazon, Google, Facebook et al., have made access to and control over huge swathes of data enormously lucrative. The value of these companies, as expressed by their share prices, often greatly exceeds their actual earnings:[17] their share value rather expresses the faith investors have in their future earnings, gained in a future where control over information transmission will be the engine of growth and source of wealth. Silicon Valley ‘guru’ and philosopher of the network age, Jaron Lanier, calls these corporations ‘siren servers’, “giant corporate repositories of information about our lives… used for huge financial benefit by a super-rich few.”[18] The Internet simultaneously holds both the utopian promise of free access to all the world’s knowledge and the dystopian potential for the privatisation of all knowledge for the profit of a small number of monopolistic corporations. Mass Internet access and the growth of online file-sharing has developed concurrently with a concerted resistance to the free exchange of information online and the privatisation of information streams through the application of intellectual property law, content pay walls and Digital Rights Management (DRM) technologies.[19]

References:

[1] 2012. Public Library [Online]. Available: http://www.memoryoftheworld.org/public-library/ [Accessed 01/11 2013].

[2] The first instance of the phrase is attributed to Stewart Brand (writer and counterculture activist, best known as the founder of the Whole Earth Catalog) while in conversation with Steve Wozniak at the 1984 Hackers Conference.

[3] In his notebooks Dewey wrote of “My World Work- Free Schools and Free Libraries for every soul.” 

WIEGAND, W. 1996. Irrepressible Reformer: A Biography of Melvil Dewey, Chicago, American Libraries Association.

[4] While library cuts and closures in the UK have been significant, they are dwarfed by the haste and scale of library closure in Canada. A detailed and extensive timeline of these changes has been assembled by the Ex Libris Association at the University of Toronto, and can be found by searching for ‘LAC’ in the Ex Libris Wiki.

[5] Directly quoted from the New Service Model and the Directive on Recordkeeping of LAC. 2012. New Service Model and the Directive on Recordkeeping [Online]. www.bac-lac.gc.ca: Libraries and Archives Canada. Available: http://www.bac-lac.gc.ca/eng/news/Pages/new-service-model-and-the-directive-on-recordkeeping.aspx [Accessed 20/01 2014].

[6] Data obtained from the Save Libraries and Archives Canada campaign. 2011. Save Libraries and Archives Canada: The Issues- Digitization [Online]. http://www.savelibraryarchives.ca: Save Libraries and Archives Canada Available: http://www.savelibraryarchives.ca/issues-digitization.aspx [Accessed 17/01 2014].

[7] Ibid.

[8] Cory Doctorow has blogged extensively about this subject on Boing Boing.

DOCTOROW, C. 04/01 2014. Canadian libricide: Tories torch and dump centuries of priceless, irreplaceable environmental archives. Boing Boing [Online]. Available from: http://boingboing.net/2014/01/04/canadian-libraricide-tories-t.html [Accessed 04/01 2014].

[9] Canadian science librarian John Dupuis has assembled a comprehensive index of articles and sites related to environmental science library closures.

DUPUIS, J. 2014. The Canadian War on Science: A chronological account of chaos & consolidation at the Department of Fisheries & Oceans libraries [Online]. scienceblogs.com. Available: http://scienceblogs.com/confessions/2014/01/10/the-canadian-war-on-science-a-chronological-account-of-chaos-consolidation-at-the-department-of-fisheries-oceans-libraries/ [Accessed 25/01 2014].

[10] Ibid.

[11] DOCTOROW, C. 01/02/2014 2014. Health Canada scientists setting up unofficial libraries as national libraries fail. Boing Boing [Online]. Available from: http://boingboing.net/2014/01/20/health-canada-scientists-setti.html [Accessed 20/01 2014].

[12] It should be noted that Canada’s growth rate has slowed somewhat from 2013 onwards, although it remains among the world’s most prosperous nations. Hours of fun can be enjoyed comparing economic data between countries on the website of the World Bank.

[13] See the website of the 2013 book ‘The War on Science’ by Chris Turner for more information.

[14] NIKIFORUK, A. 2013. What's Driving Chaotic Dismantling of Canada's Science Libraries? [Online]. thetyee.ca: The Tyee. Available: http://thetyee.ca/News/2013/12/23/Canadian-Science-Libraries/ [Accessed 19/01 2014].

[15] KLINKENBORG, V. 2013. Silencing Scientists. The New York Times, 21/09/2013, p.SR 10. Available: http://www.nytimes.com/2013/09/22/opinion/sunday/silencing-scientists.html

[16] The full quote and further context can be found on the website of IT consultant Roger Clarke.

CLARKE, R. 2000. "Information Wants to be Free ..." [Online]. http://www.rogerclarke.com. Available: http://www.rogerclarke.com/II/IWtbF.html [Accessed 10/01 2014].

[17] GASSÉE, J.-L. 02/02/2014 2012. Decoding share prices: Amazon, Apple and Facebook. The Guardian [Online]. Available from: http://www.theguardian.com/technology/2012/may/29/faceboo-amazon-apple-share-prices [Accessed 02/02 2014].

[18] ELMHIRST, S. 30/01/2014 2013. Digital Elite: A bleak vision of the future. New Statesman [Online]. Available from: http://www.newstatesman.com/sci-tech/sci-tech/2013/03/digital-elite-bleak-vision-future [Accessed 02/02 2014].

[19] DRM technologies are used to control digital content and devices after their sale, and can control viewing, copying, printing and altering of content. For more detail on opposition to DRM, see www.defectivebydesign.org.

[20] 2013. Publishing industry continues to grow with sales of consumer e-books rising by 134%. [Online]. publishersassociation.org.uk: Publishers Association. Available: http://publishersassociation.org.uk/index.php?option=com_content&view=article&id=2480:publishing-industry-continues-to-grow-with-sales-of-consumer-ebooks-rising-by-134 [Accessed 04/02 2014].

[21] FARRINGTON, J. 04/02/2014 2013. Bowker: self-published e-books 12% of sales. The Bookseller [Online]. Available from: http://www.thebookseller.com/news/bowker-self-published-e-books-12-sales.html [Accessed 04/02 2014].

[22] 2013. Sales of printed books slump in 2012 [Online]. bbc.co.uk: BBC. Available: http://www.bbc.co.uk/news/entertainment-arts-20908048 [Accessed 04/02 2014].

[23] The issue of online pay walls for academic journals publishing new research is a hotly contested one in the publishing industry and in academia. Further reading on this debate, specifically the academic boycott of the publisher Elsevier by the ‘Cost of Knowledge’ campaign, can be found here.

[24] “Amazon has shown a peculiar genius when it comes to squeezing more dollars out of publishers.”

MARCUS, J. 2013. The Mercenary Position: Can Amazon change its predatory ways? Harper's Magazine P99-102.

[25] In 2012, an Amazon Kindle user in Norway had her account wiped and all her content deleted by Amazon.

KING, M. 2012. Amazon wipes customer's Kindle and deletes account with no explanation [Online]. theguardian.com: The Guardian. Available: http://www.theguardian.com/money/2012/oct/22/amazon-wipes-customers-kindle-deletes-account [Accessed 04/02 2014].

[26] JOHNSON, B. 2009. Amazon Kindle users surprised by 'Big Brother' move [Online]. theguardian.com: The Guardian. Available: http://www.theguardian.com/technology/2009/jul/17/amazon-kindle-1984 [Accessed 04/02 2014].

[27] BRAIKER, B. 2012. Apple accused by US of colluding with publishers to fix price of e-books [Online]. theguardian.com: The Guardian. Available: http://www.theguardian.com/technology/2012/apr/11/apple-accused-fixing-prices-ebooks [Accessed 04/02 2014].

[28] DOCTOROW, C. 01/02/2014 2013. Amazon requires publishers to use Kindle DRM. Boing Boing [Online]. Available from: http://boingboing.net/2013/10/11/amazon-requires-publishers-to.html [Accessed 19/01 2014].

[29] In England, around 79% of public libraries now lend e-books, a significantly higher percentage than in Scotland and Wales, where the official figures are approximately 40% and 50% of libraries, respectively. Northern Ireland has established a central e-book lending hub, with over 4,000 fiction titles available to borrow.

[30] LEECH, H. 2013. Did you know that 85% of e-books aren't available to public libraries? [Online]. http://shelffree.org.uk. Available: http://shelffree.org.uk/2013/02/19/did-you-know-that-85-of-e-books-arent-available-to-public-libraries/ [Accessed 20/01 2014].

[31] DOCTOROW, C. 05/02/2014 2011. HarperCollins to libraries: we will nuke your e-books after 26 checkouts. Available from: http://boingboing.net/2011/02/25/harpercollins-to-lib.html [Accessed 10/01 2014].

[32] There’s a YouTube video, made by a librarian, of the various HarperCollins titles in their collection checked out in excess of 26 times, including a print copy of Coraline with a ‘lifetime guarantee’.

[33] The full title of the report is An Independent Review of E-lending in Public Libraries in England, although it has become known as the Sieghart review after its author, William Sieghart.

[34] SIEGHART, W. 2013. An Independent Review of E-Lending in Public Libraries in England. Department for Culture, Media and Sports. P6

[35] It should not go without note that these projects primarily privilege texts in the English language. PiracyLab, a research collective based at Columbia University, is currently researching the geographical patterns of book piracy. The second most common text language on Library Genesis, one of the biggest illicit book sharing sites on the web, is Russian, at 27% of Libgen content. By contrast, a quick calculation of the available figures for Project Gutenberg indicates that its second most common language, French, makes up roughly 4% of Project Gutenberg content.

[36] These conventions were set by the Berne Convention for the Protection of Literary and Artistic Works, an international agreement on copyright that was first accepted in Berne, Switzerland, in 1886. One of the main agitators for the convention was none other than Victor Hugo.

[37] Works licensed under Creative Commons are also available freely online, via sites such as Project Gutenberg. The author Cory Doctorow has released all of his books under Creative Commons licenses.

[38] “Friction (…) matters. That’s why Apple and Amazon have had such success with the single click-to-buy button. To avoid piracy, it’s not enough to offer people a good product at a fair price. You also have to make buying as effortless as possible.”  

POGUE, D. 03/02/2014 2013. The E-Book Piracy Debate, Revisited. The New York Times [Online]. Available from: http://pogue.blogs.nytimes.com/2013/05/09/the-e-book-piracy-debate-revisited/ [Accessed 03/02 2014].

[39] Impakt 2013 was themed ‘Capitalism Catch-22’, and featured workshops, screenings, exhibitions and panel talks over a long weekend. Details and documentation of the festival are all available on the Impakt website.  

[40] The event was curated and organized by the Amsterdam Collective Monnik, inviting graphic designers, hackers, bookmakers and programmers to collaborate towards the production of a global peer-to-peer (P2P) library.  

[41] Tim Berners-Lee, the British computer scientist, is best known as the inventor of the World Wide Web in 1989.

NB: The Internet had existed for decades before this; what the World Wide Web instigated was the ability of the internet user to navigate documents stored on the internet through a browser. It was this invention that facilitated the opening up of the Internet on a massive scale.

[42] The Memory of the World Project was launched at HAIP (Hack-Act-Interact-Progress) 2012, a festival of open technology in Ljubljana, Slovenia. The theme of the festival was ‘Public Library’. The HAIP site is offline but video documentation from the festival can be found here.

[43] 2013. Memory of the World: Objectives [Online]. memoryoftheworld.org: Memory of the World. Available: http://www.memoryoftheworld.org/blog/2013/06/15/memory-of-the-world-objectives/ [Accessed 02/11/2013].

[44] Open Source provides a model of production guided by the principle that ‘information wants to be free’. Open source software exists at the opposite end of an ideological spectrum from Kindle DRM and other kinds of proprietary, closed-source software that impose restrictive and legally binding conditions of use on their users.

[45] 2012. End-to-end catalog [Online]. Memory of the World. Available: http://www.memoryoftheworld.org/blog/2012/11/26/end-to-end-catalog/ [Accessed 01/11/2013].

[46] “…that’s why I insist on human librarians, sociality, community. Software should always come from the communication need. It should never pretend to bring the new desires.”

GARCIA, D. 2014. Public Library: Memory of the World [Online]. New Tactical Research. Available: http://new-tactical-research.co.uk/blog/1012/ [Accessed 17/02 2014].

[47] Ibid.

[48] Ibid.

[49] This argument is heavily indebted to the analysis of Public Library provided by David Garcia in his posting to the nettime mailing list.

GARCIA, D. 27/02/2014. RE: Books as "a gateway drug" - Interview with Marcell Mars. Posting to NETTIME-L@KEIN.ORG. Available: http://nettime.org/Lists-Archives/nettime-l-1402/msg00057.html [Accessed 27/02/2014].

[50] BALÁZS, B. 2013. Coda: A Short History of Book Piracy. In: KARAGANIS, J. (ed.) Media Piracy in Emerging Economies. New York: Social Science Research Council. P. 411

[51] Definitive figures on academic book piracy as a percentage of overall e-book piracy are very hard to come by. My observation is made on the basis of research conducted by PiracyLab into <a href="http://Libgen.org" rel="nofollow">Libgen.org</a>, which offers over 800,000 books (some public domain, many others not) for download, mainly in the sciences and engineering.

[52] Some of the content of <a href="http://textz.com" rel="nofollow">textz.com</a> is archived here via the Internet Archive.

[53] Jan Philipp Reemtsma of the Hamburg Foundation for the Advancement of Science and Culture.

[54] MCCULLAGH, D. & LUETGERT, S. 2004. <a href="http://Textz.com" rel="nofollow">Textz.com</a> owner accused of German copyright crimes [ip] [Online]. <a href="http://seclists.org" rel="nofollow">http://seclists.org</a>. Available: http://seclists.org/politech/2004/Mar/36 [Accessed 20/01 2014].

Some useful resources for the would-be Cyber-Librarian:

aaaaarg.org

A reading group/online community sharing texts on philosophy, art and the social sciences.

archive.org

The Internet Archive is a non-profit digital library with the stated mission of "universal access to all knowledge." It provides permanent storage of and free public access to collections of digitized materials, including websites, music, moving images, and nearly three million public-domain books.

http://calibre-ebook.com

The official site for Calibre e-book management software.

http://circles.rooter.is

A fascinating data visualisation project, mapping the contents of <a href="http://aaaaarg.org" rel="nofollow">aaaaarg.org</a> and Library Genesis against each other. Click on any pixel in the Venn diagram to download a random title from the databases.

http://ebookoid.com

One of the biggest repositories of academic books online. Currently invite only.

http://github.com/marcellmars/letssharebooks

Download link for Marcell Mars ‘Let’s Share Books’ plugin for Calibre.

http://www.knowledgecommons.org/

The Open Knowledge Commons is a network of librarians, universities, students, lawyers, and technologists who are working on the challenges and opportunities of building a knowledge commons.

http://libgen.org

Library Genesis is one of the world’s biggest e-book repositories. Based in Russia, it specialises in academic texts.

http://monoskop.org

A wiki for art, culture, media and technology; it also regularly uploads European literature and criticism for download as PDF.

http://openlibrary.org

A project of the Internet Archive (listed above), Open Library aims to build ‘one webpage for every book ever published’.

http://ubu.com

Ubuweb is an expertly curated online resource for avant-garde art and literature. It was founded in 1996 by the poet Kenneth Goldsmith.

The free exchange of information online poses a particular threat to the sustainability of the publishing industry in its current form, as more and more readers switch from physical books to e-reading. The market for e-books grew by 134% in 2012,[20] making up 13% of all consumer book purchases in the UK by 2013,[21] yet in the same year sales of printed books fell 4.6%.[22] While sales of e-books have begun to slow more recently, they are still one of the few potential growth areas for publishers, who have struggled to adapt to changing patterns of book buying. The rise of digital publishing has seen the vested interests of publishers and retailers closing down open access to published materials for consumers and institutional users alike, from bestselling blockbusters to respected medical journals.[23] The distribution and sale of e-books is currently dominated by a single retailer: Amazon. Its notoriously sharp business practices have put pressure on the profit margins of even the biggest international publishers, despite the lower costs of producing and distributing digital books.[24]

Amazon, already dominant in the market for print books, has become a virtual monopoly distributor of e-books. The DRM technology attached to e-books sold via Amazon works to ensure this monopoly, restricting content so that it can be read only on an Amazon Kindle or an approved Kindle device or app. Kindle DRM is specifically designed to ‘lock’ users to Amazon devices and software and to prevent sharing of e-books (although in a small concession, US Kindle users can ‘lend’ certain books to another reader, but only once and for just two weeks). Ultimately, any purchase of an Amazon e-book is not a guarantee of ownership of that particular title, as according to the terms of use Kindle content is licensed, not sold. Your Kindle library ultimately belongs to Amazon. Should you break any of the terms of use, Amazon may legally delete your user account and all the contents of your Kindle library.[25] This has happened: in 2009, American Kindle users found that copies of 1984 by George Orwell had been remotely deleted from their devices.[26] Apple also has a similar restrictive system in place for books sold via the iBookstore (Apple is currently being sued, alongside five major publishers, for e-book price-fixing in the US).[27]

The DRM technology that is the current orthodoxy in digital publishing, a system whereby readers cannot own the books they have bought, works openly against the free and open exchange of information. Furthermore, it restricts freedom of choice by tying readers (and therefore publishers) to a single retailer via their particular e-reader. Amazon’s DRM system is “a one-stop way of converting the writer's customer into Amazon's customer. Forever.”[28] Amazon’s domination of the e-book market is squeezing publishers big and small, and the effects are being felt in the library system as it moves towards a digital service. The development of e-reading should come as a gift to beleaguered public libraries, as a switch to digital books offers to minimise the costs of acquiring, storing and maintaining a library’s catalogue. The number of simultaneous loans of popular titles could potentially be limitless. As more readers switch to e-reading, library membership could be revitalised, as members could download titles directly from the library website without visiting a branch. In theory, a universally available national catalogue could be developed, pooling the collections of the nation’s libraries to create a catalogue of millions of books, a collection far more extensive than any regional or local library could ever hold. Once completed, it would remain free from the organic decay of paper and documents and require only the most minimal of administration costs, ostensibly forever. There would inevitably be a large initial outlay involved in developing the infrastructure for such a project, but the resulting service would surely be in the spirit of Dewey’s vision of ‘free libraries for every soul’.
The current situation of library e-book lending in the UK, however, leaves us very far from this promise, hampered by the interests of retailers and publishers alike, in a situation that illuminates the difficulties of reconciling new technologies with existing systems in a way that is equitable to publisher and reader alike.

An Egyptian book restorer lays out burned and damaged books to dry in the garden of the Institut d'Égypte after a fire over the weekend nearly gutted the building and destroyed much of its contents in 2011.

Although the figures vary across the UK, a significant percentage of libraries now lend e-books[29] and the number of withdrawals is growing. However, public libraries in the UK and the US have struggled to develop agreements with publishers about e-book lending, as publishers have sought to place limits on the availability of certain e-books for lending, wary of the threat to further purchases should their e-books become widely and easily available for free. Of the top fifty bestselling UK e-books in 2012, 85% were not made available to libraries.[30] In the US, the publisher HarperCollins has placed an upper limit on the number of times that its e-book titles can be borrowed, allowing twenty-six checkouts before the licence self-destructs, at which point libraries must repurchase the title.[31] The principle of this measure is to artificially mimic the life span of a printed book, although many librarians have challenged the twenty-six-checkout limit.[32] In 2013, the UK government commissioned an independent review panel to examine the state of e-lending in English libraries and make recommendations for future policy. The Sieghart Review,[33] led by a panel of senior figures from the library and publishing world, talks of a need to introduce ‘friction’ into e-book lending, to make it less easy to borrow e-books, due to the fears of publishers and booksellers that, given a comprehensive digital library service, ‘borrowers might never need to buy another book’.[34] The review recommended a series of artificial constraints on e-book lending to protect the interests of publishers and booksellers, 21st-century versions of the limitations of a physical book supply, such as lending to only one reader at a time and having e-books deteriorate after a number of loans.
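The metered licensing model described above can be sketched as a simple counter. The following is a hypothetical illustration of the twenty-six-checkout scheme, not any vendor's or library's actual implementation:

```python
# Hypothetical sketch of a metered e-book licence: after a fixed number
# of checkouts the licence expires and must be repurchased. Illustrative
# model only; the class and its names are invented for this example.

class MeteredLicence:
    def __init__(self, title, max_checkouts=26):
        self.title = title
        self.max_checkouts = max_checkouts
        self.checkouts = 0

    @property
    def expired(self):
        return self.checkouts >= self.max_checkouts

    def lend(self):
        """Record one loan; refuse once the licence has 'self-destructed'."""
        if self.expired:
            raise RuntimeError(
                f"Licence for {self.title!r} exhausted: repurchase required")
        self.checkouts += 1
        return self.max_checkouts - self.checkouts  # loans remaining

licence = MeteredLicence("Example Title")
for _ in range(26):
    remaining = licence.lend()
print(remaining)        # 0: the 26th loan exhausts the licence
print(licence.expired)  # True: any further lend() raises an error
```

Nothing about the e-book file itself deteriorates, of course; the 'life span' is purely an accounting fiction imposed on top of an infinitely copyable object.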
However well intentioned the measures suggested in the Sieghart Review may be in their attempt to preserve the printed medium by protecting booksellers and publishers, imposing artificial constraints on e-book library lending seems a wholly retrograde measure. Introducing ‘friction’ into the process of borrowing an e-book, effectively simulating the limits of physical media, perversely defies the potential of digital technologies and goes against the principles of widening knowledge and education that the library movement should seek to embody. Friction matters in our interactions online, discouraging users and slowing down the flow of information.

There are many admirable online projects, such as Europeana, the Digital Public Library of America and Project Gutenberg, that are working towards a free, open and comprehensive online catalogue of the world’s knowledge. However, these projects inevitably have significant limits, namely the money available for digitisation in contributing states and the restrictions of copyright law.[35] All texts subject to copyright restrictions, which in the US at least apply to most books published after 1923, remain unavailable in free and open formats unless the author or their estate has specified otherwise. Furthermore, in the majority of the world’s nations literary works remain under copyright until seventy years after the author’s death, according to legal conventions drawn up at the end of the 19th century.[36] In many cases, copyright protection could extend for well over a century, a period far, far longer than the twenty-year term of patent protection applicable to an invention. It is becoming increasingly evident that the old map of the literary terrain, drawn by publishers, retailers and authors and circumscribed by century-old copyright conventions, no longer corresponds to the territory. The mass digitisation of books, even if limited at the moment to those in the public domain,[37] has had a massive impact both on the architecture of the publishing industry and on the expectations of readers. Not just the interactive friction but the outright moral poverty of copyright law and e-book DRM restrictions make illicit subversion of the existing system inevitable and, some might argue, necessary.[38] The map is now being redrawn everywhere, by readers frustrated with the limits of the existing system and impassioned by the founding principles of the public library movement. The collections of our public libraries should be considered as precious a national resource as North Sea oil, assembled as they are from accretions of scholarship, creativity and culture.
Libraries are a vital node, the heart even, of a vast social network of writers, students, researchers and scientists, and their value should be measured not only in economic or educational terms, but also in terms of their interconnectedness and influence. We live now in a globally networked world, one in which digital technologies offer new and exciting possibilities for democratic engagement, open discussion, collaboration and knowledge exchange. If we contend that libraries are a public good, a moral standard of a civilised society, then as their status in civic society is challenged it becomes increasingly important to democratise, to make public, library collections in the most appropriate way for our times: making them available online. The potential for the democratisation of all knowledge is, literally, at our fingertips. There are many readers who, frustrated with the limits of the existing system, are turning to piracy and file sharing to create their own underground libraries. Where the state can’t, or won’t, provide a universal, accessible library service, there are now numerous virtual communities coming together online to imagine alternatives. In many cases, these experimental designs are becoming actual structures, testing grounds for distributed, democratic networks of cyber-librarians.

In November 2013, I had the good fortune to visit Impakt 2013[39] and sit in on one such project: a 24-hour book hackathon, ‘Free Libraries for Every Soul’,[40] ‘an ode to books and libraries’ that explored the potential of a user-generated online library to fulfil Dewey’s dream. While the emphasis of the hackathon was on troubleshooting and refining the technical details of digital publishing, book scanning and library sharing, the animating force driving the technical discussion was the utopian dream of free and open access to all the world’s knowledge. The curators, in the event notes, write about how “libraries were our first databases, books our first webpages”, and how Tim Berners-Lee[41] was inspired to invent the hyperlink, the references that knit the web together, by the footnotes in printed texts. The notes served as a reminder of how the Gutenberg press was once as much a revolutionary, democratising information technology as the Internet is now, and of the ways in which the organisation of and access to information online is indebted to the history of print. To take the analogy further, the Internet is one further step in the evolution of publishing, and the development of a universal online library of all the world’s knowledge is its inevitable and desirable outcome.

‘Cyber-librarian’, artist and free software activist Marcell Mars opened the event with a talk introducing the audience to Public Library: Memory of the World, a collaborative online library project.[42] Involving hackers, academics and activists among its contributors, the mission of Memory of the World is to facilitate the preservation of and access to the world’s documentary heritage, to make it “permanently accessible to all without hindrance”.[43] This isn’t just a technical project; it’s primarily an ideological and political one. Many of the contributors to the Memory of the World project hail from the former Yugoslavia, where during the wars of the 1990s many books and texts were destroyed in the inter-ethnic violence; books in Cyrillic script and left-wing literature in particular were singled out for destruction. Public Library proposes a complete circumvention of the existing system of book distribution, building a peer-to-peer library system in which users become the librarians of their own collections of digital books and can share and borrow in total freedom. At the core of this project are principles inspired by Open Source software communities[44] in their promotion of free access to information and collaboration between users and developers. Contributors to Memory of the World can build their own personal libraries using Calibre, open source digital library software that allows users to make and maintain their own catalogues of books. Calibre supports a huge number of e-book formats (and can easily convert between them), is supported by a large number of devices and has its own built-in reader. The books in your Calibre collection will never be subject to arbitrary deletion, as Open Source software requires no terms of use agreement. The content, and the programme, is yours to do with as you wish.
To that end, Marcell Mars has developed, as part of Public Library, a widget for Calibre called ‘Let’s Share Books’, which allows users to share their library of books with others online through a temporary URL. To quote Marcell Mars, “with Calibre, it is easy to be a librarian”. Encouraging users to become active librarians is key to understanding Public Library and what differentiates it from something like Google Books, or even Project Gutenberg. It would be wrong to interpret Public Library as proposing a technological solution to the library problem via a single piece of software: Calibre is a tool, an important tool, but a means to an end. Public Library must be understood as a much broader project, just as the public library system itself is a social and political project as well as an educational one. Human engagement in cataloguing and sharing books is at the heart of Public Library, building a distributed social network of bibliophiles and a living archive of culture as librarians synchronise their catalogues, recommend and share books.
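The catalogue-synchronising behaviour described here can be sketched in miniature. The following toy model is entirely hypothetical (Calibre's real catalogue is a SQLite database with a far richer schema, and 'Let's Share Books' works over HTTP); it only illustrates the idea of two peer librarians merging their collections:

```python
# Toy sketch of peer-to-peer catalogue sharing: each librarian keeps a
# catalogue keyed by (author, title) and syncing exchanges the entries
# the other peer lacks. Hypothetical model for illustration only.

class Librarian:
    def __init__(self, name):
        self.name = name
        self.catalogue = {}  # (author, title) -> metadata dict

    def add_book(self, author, title, **metadata):
        self.catalogue[(author, title)] = metadata

    def sync_with(self, peer):
        """Two-way merge: each peer receives the entries it is missing."""
        for key, meta in peer.catalogue.items():
            self.catalogue.setdefault(key, meta)
        for key, meta in self.catalogue.items():
            peer.catalogue.setdefault(key, meta)

alice = Librarian("alice")
bob = Librarian("bob")
alice.add_book("Melvil Dewey", "Decimal Classification", year=1876)
bob.add_book("Theodor W. Adorno", "Minima Moralia", year=1951)

alice.sync_with(bob)
print(sorted(title for _, title in alice.catalogue))
print(alice.catalogue == bob.catalogue)  # True: both now hold the union
```

The point of the sketch is that no central server holds the master catalogue: after a sync, each librarian independently owns the union of both collections, which is precisely what distinguishes this model from a centralised service like Google Books.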

“With books ready to be shared, meticulously catalogued, everyone is a librarian.

When everyone is librarian, library is everywhere.”[45] 

Public Library insists on sociality, community and communication,[46] much like any physical public library. Users actively build and control the catalogue, in conversation with each other (much as content is built on Wikipedia), to give an accurate and contestable body of information around each book. Recommendations are generated by other users, from ‘soft’ human skills of association rather than from algorithmic ‘hard’ data. Users, the cyber-librarians, are the “anchors”[47] that ground Public Library. In this way, it reflects on the shortcomings of the current system of book distribution, in which books become just another product to be sold rather than a common good to be shared. Much more broadly, it reflects on the moral poverty of the current system of information capitalism, where information is increasingly becoming a jealously guarded commodity that serves to prop up corporate monopolies rather than a common good proliferating in free exchange. Public Library provides a proposal for how we might design a library for the digital age, and there is much that sceptics of digital reading will find to criticise. To them, I say that it is perhaps better to understand Public Library as an artwork rather than an answer. David Garcia suggests that Public Library substitutes “everyone is a librarian” for Joseph Beuys’s “everyone is an artist”, as the role of the artist offers the unique freedom of expression reserved for creative works, the “special kind of licence and protection for necessary acts of civil disobedience”.[48] It would be a profound misunderstanding of the purpose of Public Library to dismiss the concept as a naïve utopian dream. Rather than providing definitive solutions, it extrapolates a series of vital issues of community, democracy and power relations arising from the subject of the public library, issues that raise serious questions about our current political and economic status quo.[49]

The current system of copyright and intellectual property law, DRM restrictions and limited library e-lending simply does not fit the reality of the digital world. And in the spaces of misalignment, readers are increasingly turning to illicit, even illegal, activity in order to access the books that they need. Media piracy typically creeps in where a given market lacks competition and fails to keep up with the demands of consumers driven by changes in technology.[50] Similarly, where the public library system fails to address the demand for e-book lending, underground libraries will continue to step in to fill the gaps. It is telling that textbooks are among the most commonly pirated books,[51] as academic publishing has been among the slowest sectors of the industry to adapt to new technologies. It seems inevitable that a wholesale restructuring of both the book publishing and retailing industries must take place to address these criticisms.

One of the most vivid arguments for reform comes from Sebastian Lütgert, artist, theorist and founder of the (now defunct) text sharing website <a href="http://textz.com" rel="nofollow">textz.com</a>.[52] It is contained in an open letter to Jan Philipp Reemtsma, a German philanthropist[53] whose foundation held the copyrights to the works of Theodor W. Adorno. In 2004, Reemtsma obtained a warrant for Lütgert’s arrest, claiming that as owner of <a href="http://textz.com" rel="nofollow">textz.com</a> he had trespassed on the foundation’s intellectual property rights by publishing two of Adorno’s essays online.

The open letter reads:

“We are glad to announce that, effective today, every single work by Adorno […] that you claim as your "intellectual property" has become part of the very public domain that had granted you these copyrights in the first place.

Of course they will not be available instantly, and of course we will not publish them ourselves - but you can take our word that they will be out, in countless locations and formats, and that not even a legion of lawyers will manage to get them back.

Maybe it helps if you think of your "intellectual property" as a genie, and of your foundation as a bottling business.”[54] 

The music industry, and indeed some innovative publishers (such as the sci-fi and fantasy publisher Tor), have already gone some way to addressing the problem by dropping DRM from their files, in effect conceding that many of the pirates’ demands were legitimate. Libraries, publishers and retailers are going to have to follow in their footsteps. They must learn from the pirates, even assimilate their practices, as opposed to bearing down on them with the full force that the law allows. Piracy has a long and persistent history, and despite the best efforts of publishers and retailers, it only takes one individual to crack a device’s DRM and share its content online for that content to fall out of the control of copyright owners, forever.

While there is still a dedicated core of cyber-librarians, committed to the principle that information ought to be free, publishers cannot resist. Instead, they must adapt.   

Aideen Doran

March 2014

Screenshot of http://circles.rooter.is Data visualisation mapping the e-book catalogue of <a href="http://aaaaarg.org" rel="nofollow">aaaaarg.org</a> against that of Library Genesis. Each pixel directs to a download link for a book from either site’s catalogue.

Created as part of the HAIP (Hack-Act-Interact-Progress) festival, Ljubljana, 2012


Google's Game Of Moneyball In The Age Of Artificial Intelligence – ReadWrite


Over the past couple of months, Google has been playing its own peculiar game of Moneyball. It may not make a ton of sense right now, but Google is setting itself up to leave its competitors in the lurch as it moves into the next generation of computing.

Google has been snatching up basically every intelligent system and robotics company it can find. It bought smart thermostat maker Nest earlier this month for $3.2 billion. In the robotics field, Google bought Boston Dynamics, Bot & Dolly, Holomni, Meka Robotics, Redwood Robotics and Schaft Inc. to round out a robot portfolio group that is being led by Android founder Andy Rubin.

When it comes to automation and intelligent systems, Google started its acquisition spree in late 2012 with facial recognition company Viewdle and has continued picking up neural network and machine learning companies since then, including the University of Toronto's DNNResearch Inc. in March 2013, language processing company Wavii in April and gesture recognition company Flutter in October. Google bought a computer vision company called Industrial Perception in December and continued its spree with a $400 million acquisition of artificial intelligence gurus DeepMind Technologies earlier this week.

Dizzy yet? The rest of the technology industry surely is. It was hard to compete with Google’s acquisition rampage when there seemed to be very little rhyme or reason to the purchases as they were happening. Google beat out Facebook for DeepMind, while Apple had interest in Nest. But after more than a dozen large purchases, Google’s strategy is finally becoming clear.

Google is exploiting an inefficiency in the market to become the leader in the next generation of intelligent computing.

Google's Moneyball Strategy

Moneyball is a term coined by author Michael Lewis in his 2003 book, Moneyball: The Art of Winning an Unfair Game. The term is often wrongly equated with the advanced statistical models used by Billy Beane, the general manager of the Oakland Athletics, to build a club that could thrive in Major League Baseball. But the principle of Moneyball is not actually about using stats and data to get ahead; it is about exploiting systems for maximum gain by acquiring talent that is undervalued by the rest of the industry.

This is exactly what Google is doing: exploiting market inefficiency to land undervalued talent. Google determined that intelligent systems and automation will eventually be served by robotics and has gone out of its way to acquire all of the pieces that will serve that transformation before any of its competitors could even identify it as a trend. By scooping up the cream of the crop in the emerging realm of robotics and intelligent systems, Google is cornering the market on talented engineers ready to create the next generation of human-computer interaction.

Technology Review points out that Google's research director Peter Norvig said the company employs “less than 50 percent but certainly more than 5 percent” of the world’s leading experts on machine learning. That was before Google bought DeepMind Technologies. 

To put Google’s talent hoarding into context, remember that many companies are struggling just to find enough talent to write mobile apps for Android and iOS. When it comes to talented researchers focused on robotics and AI components like neural networks, computer vision and speech recognition, the talent pool is much smaller, more exclusive and far more elite. Google has targeted this group with a furious barrage of aggressive purchases, leaving the rest of the industry to wonder where the available talent will be when other companies make their own plays for building next-generation intelligent systems.

“Think Manhattan project of AI,” one DeepMind investor told technology publication Re/code this week. “If anyone builds something remotely resembling artificial general intelligence, this will be the team.”

Of course, Google is not the only company working on intelligent systems. Microsoft Research has long been involved in the creation of neural networks and computer vision, while Amazon has automated many of its factories and built cloud systems that act as the brains of the Internet. The Department of Defense research arm, Defense Advanced Research Projects Agency (DARPA), has long worked on aspects of artificial intelligence, and a variety of smaller startups are also working on their own smaller-scale intelligent platforms. Qualcomm, IBM and Intel are also working on entire systems—from chipsets to neural mesh networks—that could advance the field of efficient, independent AI.

What Is Google Trying To Accomplish?

To understand what Google’s next "phase" will look like, it is important to understand the core concepts that comprise Google. 

Google’s core objective—which has never really changed despite branching out into other areas of computing—is to accumulate and make accessible all of the knowledge in the world. Google makes money by charging advertisers for access to this knowledge base through keywords in its search engine. If there is a brilliance to Google’s business model, it is that the company has basically monetized the alphabet.

The work is nowhere near done, but Google has already done an impressive job over the last 16 years making the world’s knowledge available to anyone with Internet access. Thanks to Google, the answer to just about any question you could think of asking is at your fingertips, and with smartphones and ubiquitous mobile computing, that information is now available wherever you go. 

The next step for Google is to deliver that information to users with automated, intellectual context. The nascent Google Now personal assistant product that Google has been driving is the first step in this, but it has a lot of room to grow. 

If we take the concept of Google monetizing the alphabet and apply it to everyday objects, we can see where artificial intelligence comes into play in how Google plans to change the fundamental nature of computing.

What if you could use a device—like a smartphone, Google Glass or a smartwatch—to automatically identify all relevant information in your area and deliver it to you contextually before you even realize you want it?

If we mix the notion of ubiquitous sensor data laden within the Internet of Things with neural networks capable of "computer vision" and comprehending patterns on their own, then we have all the components of a personalized AI system that can be optimized to every individual on the planet. 

From the movie "Her."

Academic researchers call these computing concepts “deep learning.” Deep learning is the idea that machines could eventually teach themselves to perform functions like speech recognition in real time. Google's goal is to apply deep learning to everyday machines like smartphones, tablets and wearable computers. 
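To make "learning by itself" concrete, here is a minimal, illustrative sketch—nothing like Google's actual systems—of the core idea: a tiny two-layer neural network that learns the XOR pattern purely from examples, with no rules programmed in.

```python
import numpy as np

# Toy deep-learning demo: a two-layer network learns XOR from
# examples alone. Purely illustrative; real systems are vastly larger.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):                    # gradient descent on squared error
    h = sigmoid(X @ W1 + b1)              # hidden-layer activations
    out = sigmoid(h @ W2 + b2)            # network's current predictions
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_h
    b1 -= 0.5 * grad_h.sum(axis=0)

out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
predictions = (out > 0.5).astype(int).ravel()
print(predictions.tolist())
```

The network is never told the XOR rule; it recovers it from four labeled examples. Scale the same principle up by many orders of magnitude (more layers, more data, specialized hardware) and you get the speech- and image-recognition systems the article describes.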

But what about all the robots Google just purchased? This is a little trickier to discern, but if Google does eventually figure out the intricacies of artificial intelligence, it can then apply these principles to an army of automated machines that function without human interference. For instance, with computer vision, machine learning and neural networks, Google could deploy robots to maintain its data centers. If a server breaks or is having problems, a robot could pay it a visit, tap into its internal diagnostics and determine whether it has a hardware issue. Google’s driverless car could benefit from all of these technologies as well, including speech and pattern recognition.

Google’s research into robotics and deep learning doesn’t have to mean this new technology will be restricted to beefing up its current products. Advances in cognitive computing can be applied to many different industries, from manufacturing to analyzing financial data at large banks. As with smartphones, machine learning technology can be applied almost anywhere—as long as patents don't get in the way.

Google Has All The Ingredients To Make AI Work

Google has brains. Lots of different kinds of brains.

From a people perspective, Google’s head of engineering, Ray Kurzweil, is one of the world's best-known authorities on artificial intelligence. Andy Rubin joined Google in 2005 when the company purchased his Android platform to create a smartphone operating system, but these days Rubin is taking the job of building androids much more literally: he now heads up Google's fledgling robotics department. Jeff Dean is a senior fellow within Google's research division, working in the company's “Knowledge” (search-focused) group, which will most likely be the team to incorporate DeepMind.

Jeff Dean, courtesy Research at Google

Those names are just a few examples of Google's best brains at work. Google also has plenty of machine/computer brains that perform the bulk of the heavy lifting at the company.

The search product has been tweaked and torqued over the years to be one of the smartest tools ever created. In its own way, Google search has components of what researchers call “weak” artificial intelligence. Google’s server infrastructure that helps run search (as well as its Drive personal cloud, Gmail, Maps and other Google apps) is one of the biggest in the world, behind Amazon but on par with companies like Microsoft and Apple. 

Google has the devices necessary to put all that it creates into the hands of people around the world. Through the massively popular Android mobile operating system, its fledgling computer operating system Chrome OS, and accessory devices like Google Glass or its long-rumored smartwatch, Google can push cognitive, contextual computing to the world.

All Google needs to do is make artificial intelligence a reality and plug its components into its large, seething network and see what happens. That is both very exciting and mildly terrifying at the same time.

The Risks For Google, The Internet & The World

“Behold, the fool saith, 'Put not all thine eggs in the one basket,' which is but a manner of saying, 'Scatter your money and your attention.' But the wise man saith, 'Put all your eggs in the one basket and ... WATCH THAT BASKET.'" ~ Mark Twain, Pudd'nhead Wilson

In 1991, DARPA pulled much of its funding and support for research into neural networks. What followed was a period researchers called an “AI Winter,” in which the field of artificial intelligence stagnated and did not progress in any meaningful way. The AI Winter of the 1990s was not the first and might not be the last.

Google is accumulating many of the field's individual brains into one big basket. If Google fails to create the next generation of artificial intelligence (or simply loses interest), another AI Winter could follow.

Google is also betting a lot of money that it can take the components of artificial intelligence and robotics and apply them to everything it touches. Between DeepMind and Nest, Google has spent $3.6 billion on the automation industry this year, and those were just two companies. Google has lots of eggs in this basket; if the bet fails, it could cost the company years, employees and its place at the bleeding edge of next-generation computing.

Academics and pundits are also worried about the privacy implications of Google’s pursuit of a contextual Internet centered on the individual. With Nest, Google could know just about everything that you do in your home: when you leave, when you get home, how many people are in the house and so forth. Part of aggregating the world’s knowledge is parsing information about the individuals who inhabit that world. The more Google knows about us, the better it thinks it can enhance our lives. But the privacy concerns are immense and well-justified.

There is also a large ethics question surrounding the use of artificial intelligence. Some of it centers on science fiction-like scenarios of robots taking over the world (like Skynet in Terminator or the machines in The Matrix). But, beyond privacy concerns, there are many ways AI could be abused or could severely disrupt the service economy. As part of the DeepMind acquisition, Google agreed to create an ethics board to govern how it will use the technology in any future applications.


Can someone explain WhatsApp’s valuation to me?

1 Comment

Unless you were off the internet yesterday, it’s old news that WhatsApp was purchased by Facebook for a gobsmacking $16B + $3B in employee payouts. And the founder got a board seat. I’ve been mulling over this since the news came out and I can’t get past my initial reaction: WTF?

Messaging apps are *huge* and there’s little doubt that WhatsApp is the premier player in this scene. Other services – GroupMe, Kik, WeChat, Line, Viber – still have huge user numbers, but nothing like WhatsApp (although some of them have even more sophisticated use cases). 450M users and growing is no joke. And I have no doubt that WhatsApp will continue on its meteoric rise, although, as Facebook knows all too well, there are only so many people on the planet and only so many of them have technology in their pockets (even if it’s a larger number than those who have bulky sized computers).

Unlike other social media genres, messaging apps emerged in response to the pure stupidity and selfishness of another genre: carrier-driven SMS. These messaging apps solve four very real problems:

  • Carriers charge a stupidly high price for text messaging (especially photo shares) and haven’t meaningfully lowered that rate in years.
  • Carriers gouge customers who want to send texts across international borders.
  • Carriers often require special packages for sending group messages and don’t inform their customers when they didn’t receive a group message.
  • Carriers have never bothered innovating around this cash cow of theirs.

So props to companies building messaging apps for seeing an opportunity to route around carrier stupidity.

I also get why Facebook would want to buy WhatsApp. They want to be the company through which consumers send all social messages, all images, all chats, etc. They want to be the central social graph. And they’ve never managed to get people as passionate about communicating through their phone app as other apps, particularly in the US. So good on them for buying Instagram and allowing its trajectory to continue skyrocketing. That acquisition made sense to me, even if the price was high, because the investment in a photo sharing app based on a stream and a social graph and mechanism for getting feedback is huge. People don’t want to lose those comments, likes, and photos.

But I must be stupid because I just can’t add up the numbers to understand the valuation of WhatsApp. The personal investment in the app isn’t nearly as high. The photos get downloaded to your phone, the historical chats don’t necessarily need to stick around (and disappear entirely if a child accidentally hard resets your phone, as I learned last week). The monetization play of $.99/year after the first year is a good thing and not too onerous for most users (although I’d be curious what kind of app switching happens then for the younger set or folks from more impoverished regions). But that doesn’t add up to $19B + a board seat. I don’t see how advertising would work without driving users out to a different service. Sure, there are some e-commerce plays that would be interesting and that other services have been experimenting with. But is that enough? Or is the plan to make a play that guarantees that no VC will invest in any competitors so that all of those companies wither and die while WhatsApp sits by patiently and then makes a move when it’s clearly the only one left standing? And if that’s the play, then what about the carriers? When will they wake up and think for 5 seconds about how their greed is eroding one of their cash cows?

What am I missing? There has to be more to this play than I’m seeing. Or is Facebook just that desperate?
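The back-of-envelope math behind that skepticism is easy to reproduce, using only the figures in the post (450M users, $0.99/year after the free first year, $19B total price):

```python
# Back-of-envelope check on the WhatsApp deal, using only figures
# from the post: $19B total, 450M users, $0.99/user/year.
price = 16e9 + 3e9       # purchase price plus employee payouts, in dollars
users = 450e6            # WhatsApp's user count at the time
fee = 0.99               # annual subscription fee after the free first year

annual_revenue = users * fee
years_to_payback = price / annual_revenue

print(round(annual_revenue / 1e6, 1))   # ~445.5 (millions of dollars/year)
print(round(years_to_payback, 1))       # ~42.6 years at current scale
```

Even if the user base tripled and every user paid, subscription revenue alone would still take over a decade to cover the price, which is why the deal only makes sense as a strategic play rather than a revenue one.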

(Originally posted at LinkedIn. More comments there.)
