Many Mac users will already be familiar with Rosetta, the translation layer on Intel Macs which lets them run old, PowerPC versions of their software without having to recompile them or obtain new universal binaries.
However, I was doing some reading into more 'universal' processor translation software, and a browse of The Icon Bar led me to a reference to QEMU, "an open source processor emulator". It's still pretty much in beta (the Windows version is still in alpha), but it already supports x86, ARM, SPARC, PowerPC and MIPS as target architectures (the machine being emulated), for both user-mode AND full system emulation, and it can run on x86 (32- and 64-bit) and PowerPC hosts, with ARM, S390, Alpha and Sparc32 hosts in testing (think VERY alpha) and more in development. All the info is on the project's status page.
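To give a flavour of what driving it actually looks like, here's a minimal sketch (mine, not from the QEMU docs; the disk and ISO names are placeholders) of creating a blank disk image and then booting a PowerPC guest in full system emulation, assuming the qemu-img and qemu-system-ppc binaries are installed and on your path:

```python
# Rough sketch of driving QEMU from the host side: make a blank disk image,
# then boot a guest installer ISO in full-system PowerPC emulation.
# File names and sizes below are placeholders, not anything official.
import subprocess

# Create a 2 GB qcow2 disk image for the guest to install onto.
subprocess.run(["qemu-img", "create", "-f", "qcow2", "guest-disk.img", "2G"],
               check=True)

# Boot the emulated PowerPC machine from a (hypothetical) install CD image,
# with 256 MB of guest RAM; the guest OS sees a whole virtual computer.
subprocess.run([
    "qemu-system-ppc",
    "-hda", "guest-disk.img",   # primary hard disk
    "-cdrom", "installer.iso",  # CD-ROM image to install from
    "-boot", "d",               # boot from the CD-ROM first
    "-m", "256",                # guest memory in MB
], check=True)
```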
However unfinished it might be in its current state, this software is very cool indeed! Imagine being able to run not only old games or software on your brand new machine, but a whole different OS, running as though it were native, along with all the associated benefits of doing so... I can see that there was great wisdom in the people who hacked OS X into working on x86 boxes (and, subsequently, in Apple developing its Boot Camp software). To do all this across a variety of OSes and CPU architectures - effectively making your computer architecture-agnostic! - is what QEMU is trying to do. It's quite an achievement that they've got to where they already are, and I fully applaud their efforts.
Anyway, this is the perfect piece of software for those who enjoy testing new apps and 'living on the edge', as it were - a rough-and-ready program which isn't at all finished, might fall over on you and royally mess up your computer, but already shows great promise based on what it can do. Personally, I like a little danger in my computing (because it's all usually so boring and predictable!)
For more information, and to download it, head over to the QEMU site (where you'll also find versions for Windows, OS X and OpenSolaris, along with emulation examples and disk images).
And, as a postscript, here's something I bet you never knew: the BBC once compiled a massive encyclopaedic database of the United Kingdom and a social record of life in the 1980s, entitled the BBC Domesday Project. It was distributed on laserdisc, ran on a BBC Master hooked up to a Philips laserdisc player over SCSI (SCSI!), and was extremely cumbersome. Nevertheless, the project holds a soft spot in many people's hearts (including mine), because it was a sign of just how clever people could be to make the whole thing come together, even with the limited technology available in the 1980s. Read more about it on The Icon Bar.
The BBC Master line evolved into Acorn's Archimedes platform, and their machines were gorgeous bits of kit. The Acorn name now belongs to a PC OEM, with the same logo (and a rebranding), but it's just not the same. Acorn computers used to be ubiquitous throughout UK primary and secondary schools, but the company set its price points too high and mismanaged its marketing, and that, combined with the upsurge in DOS-based machines, made it sink under the waves. I love Acorns - my first computer was an Acorn A3000 - and they were incredibly ahead of their time, components-wise (RISC chips; the OS held on-chip, with user settings stored on the hard drives of the more expensive models; and in the last Risc PCs Acorn built before being sold off due to financial problems, you could even emulate an x86 processor and run Windows 95!)
The BBC Domesday Project is the perfect candidate for this kind of native-mode emulation. Resurrecting the data and making it available for all to access is easy enough once you've extracted the raw data and recompiled it, but to see it - the drive, the software, everything - running in a BBC Master emulation, functioning as it would have done natively, has long been a dream of mine. Hopefully one day I'll get that chance.
I'm as big a fan of HD content as the next geek, and I've found (as have many others) that on my PCs, with AC3Filter and the Haali Media Splitter running the show alongside it, CoreAVC is the most effective and efficient video decoder for H.264/AVC content. However, on the CoreCodec discussion forums there's been lots of talk about delays, and widespread reports of bugs in the new version (1.3.0.0), including playback failing entirely once it's installed, incompatibility with Vista, and the inability to roll back to 1.2 once 1.3 has been installed - even if it's then uninstalled.
The CoreCodec site, which has an online portal for those who have purchased the codec (including personalised download links for the installers), is also on hiatus, with all purchasing facilities suspended until the new versions of their most popular software - CorePlayer Mobile (formerly TCPMP, which I still use on my Vario 2) and CoreAVC - are bugfixed and recompiled.
The CoreCodec developers have also said several times on their forums that once the bugs are sorted out, 1.3.0.1 will be made available for purchase and everything will return to normal, but the delays are mounting and feedback from the developers is less than forthcoming.
It's a shame, because CoreAVC is a really cracking codec - its performance on less-than-brand-new systems is noticeably superior to its main competitors', and all the processing is done in software too - and I hate to see this small bit of bad press affect it. I have every hope that CoreAVC 1.3.0.1 will be released soon, as the developers promise. When it is, I'll be suggesting that they keep a closer eye on their (small, but very active and vocal) community on their own forums and elsewhere, such as Doom9, so that they can revise their code and fix problems much sooner. They're usually pretty good when it comes to fixing problems for the next release, and their community acknowledges this, but equally this whole issue could well be put down to taking a product out of beta a little too soon.
Conversely, Google loves its 'beta' tag and applies it to just about everything (I'm still a little amazed that Google Search isn't labelled as beta, given that they're continually adding new features). It raises an interesting question, too: given the option of keeping a product in continuous beta and releasing updates thick and fast as problems occur, is that better practice for developers than working for a long time to fix every problem they can foresee, releasing to the masses, then firefighting when a bunch of users come along with new, unforeseen problems? The latter is what appears to have hit CoreCodec's product, but the former (thinking of Google again) has the side-effect of devaluing the 'beta' status, in effect merging the beta and RTM stages of development until both terms have little relevance to their original meanings.
Maybe it's time for developers to rethink their use of 'beta' as shorthand for "it works, but it's not properly finished yet... But what the hell everyone, just use it anyway and don't bitch if it breaks on you", because it also - to me, anyway - devalues the usefulness of proper quality control testing by a controlled group of testers. Every good product should have QC and QA performed on it at every major developmental stage, plus usability tests to boot; otherwise you end up with a product designed by engineers without a thought paid to how regular users are actually going to use it.
The software which runs on 90% of the UK's checkout EFTPOS terminals (tills to you and me) is a prime example of software developed by developers without ever getting even a once-over from a usability engineer. Usability testing is possibly the most important stage of a piece of software's beta cycle, and it's being neglected in this increasing trend of Web 2.0 'unfinishedness'.
In fact, I think the beta tag should be dropped from many of the programs that use it and replaced with a term like "almost" - that'd encourage developers to get things finished and out the door in a properly working state much faster than they do currently. Tagging something 'beta' almost encourages slower development, which isn't always a good thing - the last time I did something like that, I spent weeks and weeks on it and continually lost my train of thought, which slowed development as a whole and meant I also lost some good ideas I'd had but never started work on.
Beta's not always a good thing to call something. Maybe beta-with-next-stage-deadline would be more appropriate for the majority of live, in-development projects I see on the web these days...
Tags: beta, betatestblog, bugs, comment, development, Quality Assurance, Quality Control
Where we once had the Froogle logo, we now have the Google Product Search one.
As of the 19th of April - two whole days ago; how did I miss this! - Froogle has been rebranded as Google Product Search. All the old Froogle URLs still work though (even the one which only idiots ever went to), and that's how I'll keep on using it (frankly, Froogle was a much cleverer name, in my opinion).
This also stands in direct contrast to the rumours that Froogle was to be wound down; the various reports in the tech news now look unfounded (and it falls in line with Google's somewhat subtle assurances that Froogle wouldn't be shut for the foreseeable future, even if various people interpreted its rather cryptic responses in their own ways). I'm glad to see it continue, even under a new banner, because it's a really useful facility for consumers - even if it is a royal pain in the arse for shop owners to add their products to.
This name change is also in line with Google's rebranding, and re-announcement, of its Search History as Google Web History. Now out of beta, Web History (once turned on in the control panel) shows you detailed statistics and results for every search you've performed since monitoring began - in my case, back to May last year. As I don't stay signed in to my Blogger account all the time (I have two accounts, one for my blogs and one for Gmail), it's nigh on useless to me because the results are full of holes, but it's really interesting to look back and see what you've surfed.
Tags: betatestblog, google, name change, product search, rebrand, web history
[To hit the mainstream] RSS has to become brain dead simple to use.
- Fred Wilson
Particls is well on the way to managing this, and this isn't something I say lightly.
The above quote was the first immediately arresting thing I saw when I fired up the Particls web site a few weeks ago. Mind you, until recently (the 13th) it wasn't called Particls; it was called Touchstone.
"What the hell is Particls?" Good question. Particls bills itself as "personalised aggregation", or, in a sentence, automated aggregation which solicits your initial input to refine its scope, and then goes off and collates appropriate news.
Personalised aggregation IS snappier, come to think of it.
It's a desktop client which, once installed and given a list of keywords to look for, just... Goes off... On its own... And reads the news. Once it's read the news, it assesses it for relevance and then feeds back to you, humble user, with relevant and (hopefully) interesting news articles from a cornucopia of feeds. Indeed, the Particls blog uses terms like "Personal Relevancy or Attention Management" to describe it (in a post discussing how Particls is not Web 3.0, which I found most amusing). The idea is that it (eventually) integrates with your calendar, email and web browsing through a series of modular interfaces, displaying news items in popup alerts which fade in and out on your desktop. You have the option to close them sooner, or sticky them via little icons in each popup.
This is what a popup looks like:
And this is how you define what you want it to keep a look out for - the Watchwords / Blacklist lists.
Watchwords
Blacklist (well, I had to put something to demonstrate, usually it's blank)
All this and more is accessible via one small tray icon - the menus are pretty well laid-out, so nipping round the options isn't much of a hassle.
You might think "pfft, this is a gimmick, unnecessary software which does little for me", but it DOES work! I just leave it running; if I'm busy and don't want to be disturbed I can set it to a low level of interruption, or define a custom level where I choose individual thresholds for the news ticker, Pebbles and popup alerts (or disable each one altogether, individually). That way I can avoid unnecessary procrastination when I'm trying to write my essays for uni! The rest of the time I just leave it on low or medium, depending on how much I'm willing to be pulled away from whatever I'm doing (because usually reading one interesting thing leads me off on a voyage of discovery around the web, spending 40 minutes or more reading around the subject). Particls achieves the best of both worlds: it helps me keep current with stuff I find interesting without my having to trawl through RSS feeds manually, looking for entries related to my interests.
There's a great deal of customisation available; you define an 'interruption level' via the tray icon, and depending on the threshold, you can have a ticker along the top or bottom of your screen showing news as it comes in, or just the occasional popup of relevant information.
There's also a very cool Web 2.0 interaction possibility with Particls: the Pebbles. By default, the Pebble adapter is disabled, but once enabled, it's like publishing your own "what I'm interested in" feed without having to manually update it all the time. Here's how it works:
The Particls Pebbles Adapter and Publishing Service is, by its very nature, designed to publish your Particls Item Stream to a publicly accessible hosted RSS feed. For this reason the Pebbles Adapter is turned off by default. Please note that opting into using Pebbles does make it available via the internet via an open RSS feed - we may add authentication later.
The main Particls Pebbles page has more information and suggestions on how you can use your own personalised RSS uberfeed, but this paragraph I feel highlights its usefulness in making 'your own personal Pebble':
Particls allows you to subscribe to information you care about, it learns what your interests are keeps you informed while you work. Pebbles makes Particls even more powerful by aggregating and syndicating your filtered Particls items to your very own RSS feed.
That's pretty cool by any stretch of the imagination! I do the work in the first place, the software goes off and checks out all that's available, sifts the chaff from the good stuff, and presents the end results to me for my perusal. Truly, this is RSS news how I've been wanting it to be for a long time now: just for me - all killer, no filler!
(If you got that Sum 41 reference... Well done... I think.)
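And because a Pebble is just an ordinary RSS feed, anything that speaks RSS can consume it. As a rough sketch (the feed URL below is a made-up placeholder - yours comes from the Pebbles page once the adapter is enabled), here's how you might pull down your latest filtered items with Python and the feedparser library:

```python
# Minimal sketch: read a (hypothetical) Particls Pebble feed and print the
# latest filtered items. Requires the third-party feedparser package.
import feedparser

PEBBLE_URL = "http://example.com/my-pebble.rss"  # placeholder, not a real Pebble URL

feed = feedparser.parse(PEBBLE_URL)
print(f"{feed.feed.get('title', 'My Pebble')} - {len(feed.entries)} items")

for entry in feed.entries[:10]:
    # Each entry is an item Particls has already judged relevant to you.
    print(f"- {entry.get('title', '(no title)')}: {entry.get('link', '')}")
```

From there it's trivial to pipe the items into a sidebar widget, an email digest, or whatever else takes your fancy.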
I have 10 - yeah, ten! - invites to give away for Particls if you're interested in joining the small community of beta testers for this project. Just add your request (with an email address) to the comments page for this entry, and I'll fire you off an invite lickety split. Sharing is caring :)
Tags: aggregation, automated, beta, betatestblog, particls, personalisation, preview, private, rss, web 3.0
So, what's this then? I found yet ANOTHER reference to a site whilst going through my site logs, this time to this IP - which also resolves to assista.com. There's no info, no pages, quite literally nothing on the web about it. What's it all about?
I'll be keeping my eyes peeled and ears pricked for news on this; it could be quite interesting. I wonder if it's anything like FreeIQ, which bills itself as 'The Marketplace For Ideas' (and which I signed up to earlier this week)? If so, that'd make it the third knowledge-based community site I've seen pop up in as many weeks!
Tags: assista, betatestblog, freeiq, ghost site, mystery site, private beta
topicblogs/0.9 (Or, TopicBlogs: vapourware or a potentially great service?)
I just had that awful sinking feeling.
You know, the one where you think, "well, with hindsight I should have done a bit more investigating before I put my email address and name into a box and hit Submit on a webpage which has nothing more than a logo and the aforementioned form."
What the hell am I on about, you may ask? Well, a site called TopicBlogs. The site has nothing on it, and doesn't seem to have had anything on it for a while - maybe a startup which was shuffled onto the back burner, maybe something which is still in private testing stages, maybe something which is almost ready to be unleashed into the blogosphere.
I did some Googling and found mixed information: there's a Jeff Chan listed next to topicblogs.com on a TechCrunch wiki page for a 2006 meetup in California - which would suggest that Jeff Chan is California-based, most likely in the Silicon Valley / Bay Area, and working at a startup or incubator company. That said, maybe Mr. Chan was developing an idea on that domain, got bored and let it expire, and it was purchased for other, more nefarious ends. Maybe it's a legit site which is simply unfortunate enough to be hosted with a webhost known for unscrupulous operations (Layered Tech)...
...Maybe it's just a ruse to get people to put their names and email addresses into a database to be spammed - who knows (all I know is I saw a bot with that name spider my RSS feed). While I write this I'm doing some parallel Googling to see if I can find anything more, and it looks like this might be a system devised to crawl the web looking for WordPress / TypePad sites and then abuse their comment-posting scripts to flood them with spam. I found some fairly convincing evidence on PlanetMike's blog, and the most unfortunate thing is that that entry dates only from the beginning of this month.
Oh well, there goes my address into the wild west of yet more spam lists (I already get a ton of spam thanks to my address having escaped onto the web via leaky mailing list archives). The Web Robot Abuse Blog also notes that topicblogs is hosted on .layeredtech.com, which it already blacklists for known spam operations.
Darn it, why aren't I more careful with my email addresses sometimes? Oh well. Don't make the same mistake I did! Maybe it'll turn out alright on the night, maybe it won't. Stay tuned to see what happens.
There may be a ray of hope though (and I'm an eternal optimist); see what Nicholas Seow wrote in one of his blog entries, way back in 2005:
Apparently this crawler is linked to ping-o-matic, and is notified when you give ping-o-matic a RPC. This obviously means that topicblogs is some sort of blog-indexing service, or maybe a new social network service. It certainly gives the impression of a startup which has some sort of innovative idea they’ve yet to unveil to the world.
http://thom.org.uk/blog/Topicblogs.aspx has shown that topicblogs is somehow related to http://www.backweave.com/blog/, or Jeff Chan if the profile information is accurate. Yep, this definitely seems like a startup - a one-man mission, in fact.
Checking up on both the domains of topicblogs.com and backweave.com, we can surmise that both are owned by the same person, Tze Yi Chan, or Chan Tze Yi if we follow the Chinese naming sequence. No doubt this is the same person as Jeff Chan.
Well, I’ve been unable to find any more information, so I may just email them out of curiosity. Good luck to Jeff Chan in his endeavour, and his bot is welcome to crawl over all over the place here at propitiate.net, as long as I don’t find out anything nasty about it.
This information tallies with some of the other stuff I've already found myself, so who knows? One of life's great mysteries.
Perhaps it's just a big social experiment to see who responds to the feed scrapes and puts their address into the box... Who knows?
More references:
Thom.org.uk - topicblogs
Propitiate.net
LabNotes - Who's reading the blogosphere?
TechCrunch Wiki: techcrunh7 info (search page for Jeff Chan)
(Yeah, I know most of these articles are quite old, but that almost adds to the mystique... Such a long time without any real revelations... What could the site be? Hmmmmmmmmmm.)
Tags: betatestblog, harvester, mystery, suspicious, topicblogs, web spider
Onfolio/3.0 Beta 2 ... (or, OnFolio: collaborative research and newsreading)
I was going through the server logs on one of my (pretty much unadvertised) blogs (uniblog.co.uk) when I started noticing a LOT of hits from bots all calling themselves "Onfolio/3.0 Beta 2".
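(If you fancy checking your own logs for the same thing, here's the rough sort of one-off script I mean - a sketch only, and it assumes an Apache-style 'combined' format log file called access.log, which may well not match your own setup:)

```python
# Rough sketch: count hits per requesting IP where the user agent contains
# "Onfolio". Assumes an Apache "combined" format log file named access.log.
import re
from collections import Counter

# combined log format ends with: "referer" "user-agent"
LINE = re.compile(r'^(\S+) .* "([^"]*)"$')

hits = Counter()
with open("access.log") as log:
    for line in log:
        match = LINE.match(line.rstrip("\n"))
        if match and "Onfolio" in match.group(2):
            hits[match.group(1)] += 1

for ip, count in hits.most_common():
    print(f"{count:5d}  {ip}")
```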
Since I'd never heard of it, I went exploring... Turns out it's a maturing web service for collecting and aggregating online content and research, and it was recently acquired by Microsoft. From their site:
Onfolio is an add-in for the Windows Live Toolbar that helps you collect and organize online content, read RSS news feeds, and share content in emails, blogs and documents. With Onfolio, you get all of these tools built into your browser for simplicity. Whether you are planning a trip, looking for a job, investigating a major purchase, or simply looking for a better way to keep up with the news that interests you, Onfolio will help you be more efficient, thorough and organized.
From what I see, there's either existing or planned development of:
- Blogging and commenting integration
- News aggregation via XML/RSS
- Collaborative researching and sharing of information
- Creating an archive of files, research and other miscellanea for retrospective review
- Tight(er) integration into the browser to complement standard searching
... and more besides. There's loads of info on the OnFolio site.
I've had loooooooads of hits from various IPs in various subnets, but all with reverse DNS entries ending in .as9105.com - a domain I recognise from past experience as the Freeserve-now-Tiscali AS.
So, is it just pure coincidence that loads of (only) Tiscali users are using the OnFolio IE search toolbar plugin, or is there something else going on? All the IPs I checked out resolved to dynamically-assigned hostnames in the ADSL ranges (194-247-239-150.dynamic.dsl.as9105.com [194.247.239.150] and 212-1-142-110.dynamic.dsl.as9105.com [212.1.142.110] are two examples), but for the life of me I can't work out why there are no requests from anywhere other than Tiscali subnets!
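For anyone who wants to poke at this themselves, the reverse lookups are easy enough to script - here's a quick sketch in Python, with the two example IPs above hard-coded in (purely illustrative, obviously):

```python
# Quick sketch: reverse-resolve a couple of the requesting IPs and flag the
# ones whose hostnames end in .as9105.com (the Freeserve/Tiscali AS).
import socket

ips = ["194.247.239.150", "212.1.142.110"]  # example IPs from my logs

for ip in ips:
    try:
        hostname = socket.gethostbyaddr(ip)[0]  # (hostname, aliases, addrs)
    except socket.herror:
        hostname = "(no reverse DNS)"
    tag = "Tiscali/AS9105" if hostname.endswith(".as9105.com") else "other"
    print(f"{ip:15s}  {hostname}  [{tag}]")
```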
Anyway, this tool looks quite nifty - all the requests have been for my XML feeds, which I assume ties in with the software's ability to aggregate news via RSS (go figure) - but still, I'd not really heard of it at all, and now, all of a sudden, it looks like they're gearing up for a larger webcrawl to pull their content together. And if they're spidering my little old web sites... They must be nearing out-of-beta.
So, go check out OnFolio - now owned by Microsoft (hiss), but it still looks fairly cool if you're into your inline toolbar widgets. There's also an OnFolio Group Blog if you're thusly inclined; I'm pretty sure any further questions about the service or where it's headed are answered on there.
Tags: aggregation, betatestblog, microsoft, news, onfolio, online, reports, research