There’s a good read about a few oligopolistic publishers proposing unethical surveillance technologies for academic libraries. The original article, while covering a lot of ground from a variety of people, was missing the perspective of librarians on the subject. On my other blog, I wrote some points about this third-party potential for breaching confidentiality in a library and about an ethical approach librarians might take.
The Mastodon social network system is the most promising advance I’ve seen recently toward establishing a better, more compelling social networking system.
I’ll explain why I think it’s worth leaving closed networks like Twitter, Facebook, Google Plus, etc. for Mastodon. I’d also like to say a little about how Mastodon works and mention something nice for the academic community. Perhaps you use something like Academia.edu? Perhaps you’ve heard of ScholarlyHub.org? Then perhaps you should know about Scholar.social. But first, Mastodon. Continue reading “Switch to a Mastodon Social Network”
Google announced that it would not continue developing Google Wave. At first read I thought this was an awful decision: Google Wave is a truly incredible product that, although it takes some getting used to, has huge potential. I thought Wave was one of the most important developments on the Internet since the Web. In a previous post I argued that Wave would be massively disruptive, disintermediating social activity on the Web while doing a lot of other very interesting things. After a bit more reflection, I think there may be something more interesting in Google’s announcement, and I don’t think it’s as simple as killing Wave. Continue reading “Wave’s Death Could be Preparation for a Rebirth”
OpenFile (openfile.ca) opened its public beta today. It’s attempting to develop a new model for news reporting. I discovered it through a colleague’s Twitter post and was quickly fascinated by the OpenFile model, which I think may have found a sweet way to conjoin citizen media with professional news reporting. Continue reading “New Way of News: OpenFile”
Ad hoc social networks: right now that’s what I’m calling the disruption Google Wave will wreak. I’m looking forward to its leaving the invite-only preview. It’ll be like kudzu sprouting everywhere, from its quiet persistence in the nooks and crannies of the Web, right on through to the most popular gathering spots.
Google Wave, or perhaps more accurately the open source Wave protocol, could be the most important innovation in how we interact with the Internet since the development of the Web. Continue reading “Start the Wave: Disintermediating Social”
I’ve been talking about computer-assisted shallow atom assembly (CASAA) in my posts on how we acquire knowledge in a life with the pervasive Internet. Yesterday I read about Microsoft’s new search engine, Bing, which they’re actually calling a “decision engine.” From what I’ve read, they’re making a clear effort to push search in the CASAA direction. Look at how Ballmer describes it: Continue reading “CASAA Birthing – New Decision and Knowledge Engines”
In a recent post, Nova Spivack considers “the stream” as the Internet’s next evolutionary stage. I think he makes a lot of compelling points, and I’m clearly partial to stream terminology (like it says above, I’m trying to mind the current). It builds on McLuhan’s notion of the nervous system, which is neat. Spivack’s conceptualization of recent Web innovations is something akin to a stream of consciousness, or more specifically streams of thought and conversation. But I end up wondering how fluid this stream really is. Continue reading “The Nervous System’s Emerging Stream”
In a previous post, I said that search engines essentially accomplished their jobs but created a big problem.
Search engines initially answered our question of “How or where can I find the information I want?”, but in indexing the content of the Internet and providing access, they created a much more troubling problem. That first question tends to overshadow another, equally if not more important question: “How do I assemble knowledge from the information I find?” That problem, I think, will be solved by computer-assisted shallow atom assembly, which may mark a significant new stage of Internet-related development. Continue reading “Acquiring Knowledge: Computer-Assisted Shallow Atom Assembly (2)”
Has our approach to acquiring knowledge moved from the deep end of a continuum to the broad but shallow end? The Internet medium, and the associated technologies used to develop, contribute to, and distribute knowledge with it, call out for knowledge acquisition through breadth. In general, I think we’re using it to acquire knowledge through a great shallow breadth of sources rather than through single deep sources. We’re coming to accept that a great shallow breadth delivers an equivalent fulfillment of knowledge, and in most cases we may even be developing a preference for this method of acquisition. Continue reading “Acquiring Knowledge: A Great Shallow Breadth Over Depth (1)”
The effort to perpetuate culture, knowledge, and whatever else we store on certain media is not the only reason to consider an imperative to copy. I read today that Michael Moore’s new film has spread through the peer-to-peer networks. The news itself interests me less than the point being made about why this may inadvertently have benefited his efforts.
According to the article I linked above, Moore says “We took measures a few weeks ago to place a master copy of this film in Canada so if they did take our negative we would have a duplicate negative of this film in Canada.” He’s referring to his concern that the US government might confiscate the film since a portion of it was filmed in Cuba, which is essentially off-limits to American interests.
Moore is calling attention to the fate of the singleton copy. Obviously, without other instances of it, we’d have little to no chance to take in what it portrays (whether that’s good or bad is a matter of taste and not the point). Rather, the politics and special interests that might prevent a copy from reaching the public or perpetuating itself are constantly at work, sister forces to the destruction brought by time.
As the article reports, Moore’s film (perhaps not by his own intention) survives this fate through the Internet’s means of replication. Digital media, with their special capacity for being copied and distributed (even across artificial boundaries), prevent the film’s disappearance. Once it’s free and the interest is there, the information propagates, surviving forces that would otherwise erode it in the waters of Lethe.
As an aside, my last post on mass replicability, along with this one, is carrying me toward a larger point. I’m slowly working on it. Actually, I think it has something to do with Heidegger.