Switch to a Mastodon Social Network

The Mastodon social network system is the most promising advance I’ve seen recently toward establishing a better, more compelling social networking system.

I’ll explain why I think it’s worth leaving closed networks like Twitter, Facebook, Google Plus, etc. for Mastodon. I’d also like to say a little about how Mastodon works and mention something nice for the academic community. Perhaps you use something like Academia.edu? Perhaps you’ve heard of ScholarlyHub.org? Then perhaps you should know about Scholar.social. But first, Mastodon. Continue reading “Switch to a Mastodon Social Network”

Wave’s Death Could be Preparation for a Rebirth

Google announced that it would not continue developing Google Wave. At first read I thought this was an awful decision: Google Wave is a truly incredible product which, although it takes some getting used to, has huge potential. I thought Wave was one of the most important developments on the Internet since the Web. I argued in a previous post that Wave would be massively disruptive, disintermediating social activity on the Web while doing a lot of other very interesting things. After a bit more reflection, I think there may be something more interesting in Google’s announcement, and I don’t think it’s as simple as killing Wave. Continue reading “Wave’s Death Could be Preparation for a Rebirth”

Start the Wave: Disintermediating Social

Ad hoc social networks: right now that’s what I’m calling the disruption Google Wave will wreak. I’m looking forward to it leaving the invite-only preview. It’ll be like kudzu sprouting everywhere, from its quiet persistence in the nooks and crannies of the Web, right on through to the most popular gathering spots.

Google Wave, or maybe more accurately the open source Wave protocol, could be the most important innovation in our interaction with the Internet since the development of the Web. Continue reading “Start the Wave: Disintermediating Social”

CASAA Birthing – New Decision and Knowledge Engines

I’ve been talking about computer-assisted shallow atom assembly (CASAA) in my posts about how we acquire knowledge in life with the pervasive Internet. Yesterday I read about Microsoft’s new search engine, Bing, which they’re actually calling a “decision engine.” From what I’ve read they’re making a clear effort to push search in the CASAA direction. Look at how Ballmer describes it: Continue reading “CASAA Birthing – New Decision and Knowledge Engines”

The Nervous System’s Emerging Stream

In a recent post, Nova Spivack considers “the stream” as the Internet’s next evolutionary stage. I think he makes a lot of compelling points, and I’m clearly partial to stream terminology (like it says above, I’m trying to mind the current). It builds on McLuhan’s notion of the nervous system, which is neat. Spivack’s conceptualization of recent Web innovations is something akin to a stream of consciousness, or more specifically, streams of thought and conversation. But I end up wondering how fluid this stream really is. Continue reading “The Nervous System’s Emerging Stream”

Acquiring Knowledge: Computer-Assisted Shallow Atom Assembly (2)

In a previous post, I said that search engines essentially accomplished their jobs but created a big problem.

Search engines initially answered our question of “How or where can I find the information I want?” but in indexing the content of the Internet and providing access, they created a much more troubling problem. That first question tends to overshadow another, equally if not more important question: “How do I assemble knowledge from the information I find?” That question will be addressed by computer-assisted shallow atom assembly, which I think may be a significant new stage of Internet-related development. Continue reading “Acquiring Knowledge: Computer-Assisted Shallow Atom Assembly (2)”

Acquiring Knowledge: A Great Shallow Breadth Over Depth (1)

Has our approach to acquiring knowledge moved from the deep end of a continuum to the broad but shallow end? The Internet medium, and the technologies we use to develop, contribute, and distribute knowledge with it, call out for knowledge acquisition through breadth. I think, in general, we’re using it to acquire knowledge via a great shallow breadth of sources rather than via single deep sources. We’re developing an acceptance that acquiring knowledge via a great shallow breadth delivers an equivalent fulfillment of knowledge and, in most cases, we may even be developing a preference for this method of knowledge acquisition. Continue reading “Acquiring Knowledge: A Great Shallow Breadth Over Depth (1)”

Mass Replicability – Part 2

The effort to perpetuate culture, knowledge, and whatever else we store on certain media is not the only reason we need to consider an imperative to copy. I read today that Michael Moore’s new film has spread through the peer-to-peer networks. This news doesn’t interest me so much as the point being made about why this may inadvertently have been beneficial to his efforts.

According to the article I linked above, Moore says “We took measures a few weeks ago to place a master copy of this film in Canada so if they did take our negative we would have a duplicate negative of this film in Canada.” He’s referring to his concern that the US government might confiscate the film since a portion of it was filmed in Cuba, which is essentially off-limits to American interests.

Moore is calling attention to the fate of the singleton copy. Obviously, without other instances of it, we’d have little to no chance to take in what it portrays (one’s taste for whether that’s good or bad is not the point). Rather, the politics and special interests that might prevent a copy from being exposed to the public or perpetuating itself are constantly at work as sister forces to the destruction brought by time.

As the article reports, Moore’s film (perhaps not by his own intention) survives this fate through the Internet’s means of replication. Digital media, with their special capacity for being copied and distributed (even across artificial boundaries), prevent the film’s disappearance. Once it’s free and the interest is there, the information gets propagated, surviving forces that would otherwise erode it in the waters of Lethe.

As an aside, my last post on mass replicability, along with this one, is carrying me toward a larger point. I’m slowly working on it. Actually, I think it has something to do with Heidegger.

Mass Replicability

Here’s an unfinished thought on mass replicability (I may have just made up that term); I’m going to take note of it and continue later. Living in an age of digital media and means, do we have an imperative to make as many copies as possible of the information, cultural artefacts, algorithms, etc. that we store in this medium? Must we mass replicate all our digitally stored leavings?

I’ve been chatting (err e-mailing) with my friend, Chris, about his concern with digital cultural amnesia. This came via his collection of old modem protocols and BBS doors. He brought up the point that people have to be able to remember how to access old data, even when we have the ability to emulate older software. This is a different problem from what I started this post with, but it is related so I’ll come back to it in a moment.

See, I’m thinking (and this is by no means a new problem) that even if digital media don’t really decay nicely like analog media but rather, just give their storage bounty an all-or-nothing effort, their saving grace may be the ease with which we can make replicas of whatever is digital. Perhaps if we focus like mad on making as many copies of every digital thing that we can, we’re making progress on extending the digital archival value for the future, in spite of its lack of graceful analog-like decay.
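The copying-like-mad idea can be made concrete. Here’s a minimal sketch of my own (not anything from the post itself) of what deliberate replication might look like: copy a file into several destinations and verify each copy against a checksum of the original, so that a corrupted copy never counts toward the replica tally. The function names and structure are purely illustrative assumptions.

```python
import hashlib
import shutil
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large files don't load into memory at once.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def replicate(source: Path, destinations: list[Path]) -> list[Path]:
    """Copy `source` into each destination directory and keep only
    copies whose checksum matches the original. Returns the verified copies."""
    expected = sha256_of(source)
    verified = []
    for dest_dir in destinations:
        dest_dir.mkdir(parents=True, exist_ok=True)
        copy = dest_dir / source.name
        shutil.copy2(source, copy)  # preserves timestamps as well as contents
        if sha256_of(copy) == expected:
            verified.append(copy)
        else:
            copy.unlink()  # a bad copy is worse than no copy
    return verified
```

The verification step matters because digital media fail all-or-nothing, as noted above: a replica only extends archival value if it is bit-for-bit identical to the original.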

Even if that’s the case, then we still need, as my friend pointed out, people that can figure out how to access it or use it. I imagine there may be a day when you could hand someone a floppy disk or DVD ROM and the person will have no clue that it might even be a storage medium. Maybe at that point in time, computers as we think of them won’t even exist. All would be lost.

That’s a pretty bad case. Perhaps that isn’t a problem if we obsessively copy all digital media to every new medium, in as many instances as possible. But it doesn’t change the access issue. If our person in the future has our artefacts stored digitally, what’ll he do with them? He’ll need some understanding. We need to find ways to ensure that we also pass along our savoir-faire. And that gets me thinking about free and open source software, where the code is accessible along with everything else. Perhaps one of the more important aspects of FOSS is what it may build for our future. In propagating freedoms such as those laid out in the GPL, its most important significance may be that it makes mass replicability possible: the necessary liberation for copying the digital is enabled, while the means to access what is copied are encouraged.

As I said, I’ll have to spend some time working on this thought. What makes copying pragmatically necessary? If we hope to preserve humanity’s wisdom and culture for our future at all, can we argue that we have an imperative to copy? Perhaps people that never delete, P2P, and “pirates” are the next bogman, Library of Alexandria, or papyrus.