D’Arcy Norman dot net

no more band-aids

Tag: metadata

geotagging in Aperture with Maperture

I’ve been geotagging many of my photos on Flickr, but it’s always bugged me that the geolocation metadata was not available in my Aperture library – geotagging only happened after posting photographs to Flickr, and that metadata was essentially lost from my library.

That just changed. Now I’m using the awesome new Aperture geotagging plugin Maperture, adding latitude and longitude data directly within Aperture before uploading to Flickr etc… That means I get to keep my metadata.

Here’s what the Maperture metadata entering screen looks like:

geotagging in Aperture with Maperture

and once posted to Flickr, the geotagging data is still available:

displaying the geotagged data from Flickr after posting

And, thankfully, the coordinates seem to match up pretty closely. I’d tried using Google Earth via the Flickr Export plugin for Aperture to add the geotag data before, and there was a mismatch when viewed on Flickr. Maperture seems to work great so far!
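
Under the hood, what a geotagging plugin like Maperture writes is standard EXIF GPS metadata, which stores each coordinate as degrees/minutes/seconds plus a hemisphere reference letter. As a rough sketch of that conversion (this is illustrative, not Maperture’s actual code):

```python
from fractions import Fraction

def to_exif_gps(decimal_deg, is_latitude):
    """Convert a signed decimal coordinate into EXIF-style GPS fields:
    a hemisphere reference plus (degrees, minutes, seconds)."""
    if is_latitude:
        ref = "N" if decimal_deg >= 0 else "S"
    else:
        ref = "E" if decimal_deg >= 0 else "W"
    value = abs(Fraction(decimal_deg).limit_denominator(10**7))
    degrees = int(value)
    minutes_full = (value - degrees) * 60
    minutes = int(minutes_full)
    seconds = float((minutes_full - minutes) * 60)
    return ref, (degrees, minutes, round(seconds, 4))

# Calgary, roughly:
print(to_exif_gps(51.0447, is_latitude=True))    # ('N', (51, 2, 40.92))
print(to_exif_gps(-114.0719, is_latitude=False)) # ('W', (114, 4, 18.84))
```

Tools like exiftool handle actually writing those fields into the image file; the point is that once they’re in the file, the metadata travels with the photo wherever it goes.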

Author: D'Arcy Norman | Posted on: August 13, 2008 | Categories: general | Tags: aperture, Flickr, geolocation, geotagging, metadata, Photography | 9 Comments on geotagging in Aperture with Maperture

Digital Albums as Content Packages

I had a quick IM chat with David Gratton last week, when he was asking me what I thought of content package specifications. My initial from-the-hip reaction was along the lines of "gah! metadata for metadata's sake" and that just getting content Out There was the goal, not encapsulating it in layer after layer of helpful metadata.

Then we spent a couple of minutes hashing it over. If there's a requirement that a set of content needs to be ingestable in a system, a package begins to make sense. A system then only needs to know how to ingest stuff that meets a given specification, and all kinds of workflow opportunities open up. I'm skeptical about the benefit to the end user (students, teachers, etc…) but the value to the Institution (or higher) is undeniable.

Then, David writes a blog post this morning, where it all becomes clear. Content Packages are really a way for content producers to bundle up various bits that make up the experience of interacting with their content. The individual bits of content, the metadata that describes each one, the metadata that describes various paths through it, interfaces to present the content to the user, potentially code that interacts via an API to communicate with other systems and users, etc…

David is approaching from the angle of the music industry, specifically through the awesome Project Opus. Content Packages as a replacement for the dying CD industry (bits are cheaper than atoms). The XIPF project (Extensible Interactive Packaging Format) will be building on MPEG 21 to define ways to share content experiences (albums, etc…) and they're planning on working with the education community so it's not just about building the next 8 Track specification.

If this works out, when you buy a digital album, instead of simply getting a set of tracks and maybe embedded cover art, and maybe a PDF of the liner notes, you'd get an XIPF package containing the full experience (tracks, cover art, liner notes, lyrics, embedded interfaces to community features, etc…) all in one shot. It'd be cool to see Apple get on board so when I buy albums on iTMS it comes in a standard format, as they will from Opus, et al.

It's interesting that the XIPF wiki doesn't mention either IMS CP or SCORM as existing models, but a fresh start with an extensible model from the ground up will be nice anyway. Hopefully there will be some form of interoperability between the camps.

So, if I look at content packaging as more of an experience than simply as a "content cartridge" then it makes more sense. 
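
The core "bundle content plus manifest" idea can be sketched in a few lines. This toy packager uses an invented JSON manifest, nothing like the XML manifests IMS CP or XIPF actually define, but it shows why an ingesting system only needs to understand one format:

```python
import json
import zipfile
from pathlib import Path

def build_package(package_path, assets, extra=None):
    """Bundle media files plus a machine-readable manifest into one zip.
    The manifest format here is invented for illustration; real specs
    like IMS CP and XIPF define their own (XML-based) manifests."""
    manifest = {"items": [Path(a).name for a in assets]}
    if extra:
        manifest.update(extra)
    with zipfile.ZipFile(package_path, "w") as zf:
        zf.writestr("manifest.json", json.dumps(manifest, indent=2))
        for asset in assets:
            # Store each asset flat at the package root, next to the manifest
            zf.write(asset, arcname=Path(asset).name)
    return manifest
```

Any system that knows to open the zip and read the manifest can ingest whatever is inside, which is exactly the workflow opportunity described above.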

Author: D'Arcy Norman | Posted on: February 5, 2007 | Categories: Uncategorized | Tags: contentpackaging, IMS, metadata, XIPF | 1 Comment on Digital Albums as Content Packages

Oh, good! More metadata specifications!

The problems with the adoption and implementation of the previous versions of the LOM are apparently solved by the addition of more definitions of structured taxonomy-driven authoritative metadata systems.

I’m posting this to remind myself to not get sucked into this stuff. It’s good that people are thinking about how to improve on the LOM, and even deprecating the term “learning object” (replaced by “resources”) but for the love of all that is holy and good, please focus on the content, context, and pedagogy and not on the metadata.

Whew.

Author: D'Arcy Norman | Posted on: January 30, 2006 | Categories: Uncategorized | Tags: learningobjects, metadata | 1 Comment on Oh, good! More metadata specifications!

Learning Objects: RIP or 1.0?

David Wiley just wrote an excellent post about the “death” of learning objects. He’s right on the mark, emphasizing the learning part of the buzzword, while we geeks who were attempting to implement some of the early LO-based software got so woefully distracted by the object and reuse angles. He’s also much more articulate than I am, so give his article a read, then come back here. I’ll wait. Go ahead.

OK. You’ve read his post. Good, eh? Now, I just wanted to add some thoughts from the perspective of a “learning objects” software developer (I was rather involved with the development of CAREO, which has apparently been championed as one of the early Learning Object Management Systems).

I was as guilty as anyone, if not moreso. CAREO was intended to provide a central clearinghouse of these magically reusable bits of buzzword compliant digital goodness. I was sucked into the hype, along with an entire generation of implementors. We had an entire nationally funded project (EduSource) with the goal of working out the plumbing problems to get these wondrous Learning Objects flowing. As geeks, that’s all it was – a plumbing problem. All we had to do was hook a few things together, attach an input or thirteen, throw a switch, and revel in the magical incredibleness that would Just Happen Because We Built It.

And, of course, outside of carefully scripted demos, nothing really happened. EduSource sort of dissolved. CAREO continued to operate, sort of, but without any financial or institutional support. There are still some users of the system, but it’s basically running as a snapshot. A postcard from 2002.

Was CAREO a failure, then? I’d argue an emphatic “absolutely not, bucko!” because it served (and continues to serve) a crucial role. Before CAREO, there wasn’t a solid, concrete example that we could all point to and say “there’s learning objects!” We didn’t have a testbed, a sandbox, a lab. Through CAREO, and an entire generation of “learning object management” software, we learned a heck of a lot about the concept. We were right sometimes (metadata should be as transparent as possible, people do want to share stuff…) and we were wrong sometimes (the UI as a thin veneer over the database, overemphasis on metadata specifications and interoperability…). But we learned.

Also, I get the feeling that the Learning Objects Movement was just a few years ahead of itself. Now, social software is oozing out of the woodwork. Tagging and folksonomies are pushing metadata into every corner of the networks. Mashups via “Web 2.0” web-application-API layers are amplifying and exposing network effects to connect and layer sources of information that were previously relegated into locked silos.

Personally, I learned a very valuable lesson that can best be distilled into Ward Cunningham’s description of the original wiki software:

The simplest online database that could possibly work.
– Ward Cunningham

I used to have a version of that written in big block letters across the top of my whiteboard.

It’s something that was essentially ignored by all of us Early Learning Object Implementors. We wound up with insanely complicated data schemas (have you ever looked at the full IMS/IEEE LOM?) and attempted to find elegant ways to store the XML directly in databases (before XML-in-databases was in vogue). We came up with these funky national networks of unique and distinct flavours of webservices, so we could share our overly complex data. We invented new, innovative and cool ways of connecting these systems.

But, we completely lost sight of the simple fact that the reuse that is important, and actually much more difficult, is the pedagogical use of content, not a futile pursuit of technical interoperability. I suggest that learning objects are not dead. Far from it. New ideas like implementations of the semantic web, and structured blogging, and social software for creating and sharing resources – they all combine to breathe fresh life into the concept of the learning object. But with the emphasis on learning rather than object.

I’ve got a nagging feeling that the whole buzz over ePortfolios is following a familiar path. Which is why I’m choosing to ignore the buzz on that topic and play with some of my own ideas.

Whew. OK. That’s off my chest. Albatross released. Monkey off of back. Thanks to David for the cognitive nudge required.

Author: D'Arcy Norman | Posted on: January 9, 2006 (updated May 25, 2011) | Categories: general | Tags: careo, development, hosting, identity, learningobjectrepositories, learningobjects, metadata, Noteworthy, silo | 7 Comments on Learning Objects: RIP or 1.0?

Structured Blogging: Semantic web for the rest of us?

Structured Blogging

Year: 2005

Author: The Structured Blogging Folks

Platform: Other

Category: Utility

Publisher: structuredblogging.org

Price: Free!

Rating: 5 out of 5

I’ve been playing with the Structured Blogging plugin for WordPress for a while now, and just noticed a new version – it’s almost up to the mythical “1.0 release”. They’ve added a bunch of new microcontent types with some great structured metadata appropriate to each type. I’m planning on using structured blogging a lot more in the future.

From the Structured Blogging project website:

Structured Blogging is a way to get more information on the web in a way that’s more usable. You can enter information in this form and it’ll get published on your blog like a normal entry, but it will also be published in a machine-readable format so that other services can read and understand it.

Think of structured blogging as RSS for your information. Now any kind of data – events, reviews, classified ads – can be represented in your blog.

Structured Blogging makes it easy to create, edit, and maintain different kinds of posts and is very similar to an edit form on a blog. The difference is that the structure will let users add specific styles to each type, and add links and pictures for reviews.

So, it’s an easy-to-use, flexible way of describing some standard types of things. People. Places. Events. Things. And the metadata is machine readable, enabling some of the early promise of the federated “repositories” by letting people search for stuff anywhere, and find relevant bits easily. The first bits of readily usable semantic web infrastructure.

Here’s a screenshot of the structured blogging microcontent authoring interface for Audio:
Structured Blogging Audio form

There is also a plugin available for MovableType users, if you happen to swing that way *cough*Brian*ahem*

What would be really cool is if a new microcontent type of “learning object” was defined – letting you enter some IEEE LOM-ish metadata about a resource that’s used as a learning object. There’s your learning object repository, thank you very much…
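
The plugin’s core trick is publishing the same post twice: once for people, once for machines. Here’s a toy sketch of that dual-publishing idea for a “review” microcontent type (the field names and the embedded JSON block are invented for illustration; the real plugin emitted microformat-style markup instead):

```python
import json
from html import escape

def render_review(title, item, rating, body):
    """Render a 'review' post twice: readable HTML for humans, plus an
    embedded machine-readable block that other services could parse.
    A JSON island stands in here for the plugin's microformat markup."""
    data = {"type": "review", "title": title, "item": item,
            "rating": rating, "body": body}
    human = (
        f"<div class='review'>"
        f"<h3>{escape(title)}</h3>"
        f"<p>{escape(item)}, rated {rating}/5</p>"
        f"<p>{escape(body)}</p></div>"
    )
    machine = f"<script type='application/json'>{json.dumps(data)}</script>"
    return human + machine
```

A crawler that knows the machine-readable convention can index every review, event, or classified ad across any blog that publishes this way, without screen-scraping the human markup.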

Author: D'Arcy Norman | Posted on: December 14, 2005 | Categories: general | Tags: metadata, plugins, semanticweb, structuredblogging | 14 Comments on Structured Blogging: Semantic web for the rest of us?

Pachyderm Asset Transformation Dilemma

The Pachyderm project uses jGenerator to wrap images in a flash .swf container for display in the final product. That process does a few things that are pretty handy:

  • Makes loading the images into flash easy – it’s just loading more flash…
  • Lets us embed metadata in a “tombstone” display field, much like the cards displayed in a museum. These tombstones travel with the asset, and can be displayed automagically wherever appropriate.
  • Provides a lightweight DRM – the images are useless outside of the finished Pachyderm presentation (unless you’re able to decompile flash, or take screenshots) – it’s not an overbearing DRM, just a way to make it easy to be honest.
Pachyderm Tombstones

(left) tombstone in “closed” state. Click the arrow widget dealie to expand it to view the full tombstone (right)

But, we’ve reached a point where a bit of a dilemma has been forced upon the project. Josh was describing it as Gordian today, and I think the solution might be as radical as that.

The jGenerator library that we use to wrap images in .swfs has been acting more and more flakey over the last few weeks. Likely a result of increased load, we’re seeing what may be some kind of funky threading or deadlock issues deep in java.awt classes, which are relied upon for jGenerator to do its magic.

So, here’s our dilemma:

  1. Keep on using jGenerator, Hoping For The Best™ – we’d add some debug/babysitting code to detect the deadlock issue, and attempt to recover from it.
  2. Switch to the OpenLaszlo fork of jGenerator, hoping that they may have resolved whatever issues are plaguing it. That’s kind of a blind faith option, since we don’t know if/how the Laszlo folks have modified jGenerator in their fork.
  3. Take the sword to the knot, and dump our legacy tombstone/drm/swf-wrapping implementation. Build a new one, from scratch, waaaay past the 11th hour. We know what we would need to do in the authoring application to support a much more robust and flexible metadata-embedding-and-display strategy. The idea that we’ve come up with is actually more useful in many ways, as it can be applied to any media type – not just .swf-wrapped-images. We could easily create an XML-based lookup table that the flash templates would have to consult to gather metadata about assets in a presentation. That’s actually pretty straightforward to do in the authoring app, but every flash template file will need to be modified to teach it about the tombstone xml lookup…

I’m leaning quite strongly toward the third option. Let’s dump the bottomless pit of jGenerator, and focus on the future. There may be a short-term solution – drop tombstones altogether for a while, taking some time to design the new solution without rushing it. We’d have to keep a snapshot of Pachyderm running, since Mavericks relies pretty heavily on tombstones – but even that needs to be fixed, since jGenerator barfs on characters like accents…

Basically, nobody’s comfortable with the status quo – it’s unstable, unreliable, and can lock up publishing altogether. The Laszlo option might work, or it might not, leaving us no further ahead. The xml-lookup option is the most solid design, but would take more time than we have. Stupid dilemmas…

Update: Chatting with Josh last night, and he came up with a potentially simpler solution. No need for a lookup table, just have one xml file per media asset – change the filename from .jpg or .mov or whatever to .xml and you have the xml definition of the tombstone for that asset. No xml file, no tombstone. Actually, that could give us a bunch of flexibility – different types of tombstones for different sizes of an asset, for instance. This would be relatively trivial to implement in the authoring app, but we need to figure out what it would take to implement something like this in the flash templates…
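
Josh’s sidecar idea is simple enough to sketch: swap the asset’s extension for .xml and look for a tombstone there. The element names below are invented for illustration; whatever schema Pachyderm settled on would differ:

```python
import xml.etree.ElementTree as ET
from pathlib import Path

def tombstone_for(asset_path):
    """Look for a sidecar tombstone next to a media asset: same basename,
    .xml extension (photo.jpg -> photo.xml). No xml file, no tombstone.
    Element names in the sidecar are hypothetical, not Pachyderm's schema."""
    sidecar = Path(asset_path).with_suffix(".xml")
    if not sidecar.exists():
        return None
    root = ET.parse(sidecar).getroot()
    # Flatten the top-level elements into a simple tag -> text mapping
    return {child.tag: (child.text or "") for child in root}
```

Since XML parsing handles accented characters natively, this would also sidestep the jGenerator encoding problem, and different sidecars could describe different sizes or renditions of the same asset.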

Author: D'Arcy Norman | Posted on: September 22, 2005 | Categories: Uncategorized | Tags: jgenerator, metadata, pachyderm, webobjects | 4 Comments on Pachyderm Asset Transformation Dilemma

Exploring Interestingness on Flickr

I’ve never gone through the Explore section on Flickr before. I just checked it out, and holy crap! Interestingness everywhere! There are some absolutely amazing photos on Flickr!

What they’ve done is come up with a great way to mine user-generated data to provide a view onto their database that is much richer than any taxonomy-based scheme would allow – “interestingness” is defined on the fly by the users of the system, and the parameters change constantly.
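
Flickr has never published the actual algorithm, but the mechanism described above can be sketched as a toy score: weight social signals over raw view counts, and decay everything by age so the ranking keeps shifting. Every weight here is invented:

```python
import math

def interestingness(views, faves, comments, hours_old, half_life=24.0):
    """Toy 'interestingness' score: favourites and comments count for more
    than raw views (log-damped), and the whole signal decays with age.
    Flickr's real algorithm is unpublished; these weights are made up."""
    signal = math.log1p(views) + 4 * faves + 2 * comments
    decay = 0.5 ** (hours_old / half_life)
    return signal * decay
```

Because the score is recomputed as users fave and comment, a photo can rise into the “interesting” list and fall back out, which is exactly what makes the Explore page feel alive compared to a fixed taxonomy.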

There is a calendar view, so you can see “interesting” photos from any date you like, or you can use the “last 24 hours” view to see recent stuff.

Some of the better shots from the last 24 hours:
Canal Street, Friday; some sunset; Hurricane Katrina Newborn; Burj Al Arab; Fanning out; Outside Bodie's Church

And some from June (admittedly, I was looking to see if some of my shots were in the “interesting” list, but couldn’t find any)
underneath; Nambikwara Chief; Ongoing Stream; Yuan; An African Sunset; White knuckle thrill ride; Loaded down; Morning Walk; Spritual Stairs; Giraffes-sunset; A window; the old lady and the street

I could go on, and on, and on…

This whole making-sense-of-stuff-without-rigidly-structured-metadata thing is kinda fun!

It would be even cooler if the Flickr Gods had provided an RSS feed for the “10 most interesting photos in the last 24 hours”…

Update: Steeev has set up a hacked RSS feed for the “interestingness” stream! Thanks, Steeev!

Author D'Arcy Norman · Posted on September 3, 2005 · Categories Uncategorized · Tags Flickr, metadata, software · 2 Comments on Exploring Interestingness on Flickr

Folksonomise your files with Automator

I’ve tried playing around with the Folksonomise your files with Automator tip from MacDevCenter.com – I really like the idea of it.

If you follow the tip, you get a handy item in the Finder’s contextual menu that lets you bring up a text field to enter tags for the file. Like you do with del.icio.us, or Flickr, or iPhoto. Then, Spotlight lets you find them easily. Or, you can create Smart Folders in the Finder for tags, and have these files be found (even if the tag isn’t contained within the content of the file – a case that would normally make Spotlight overlook it).

It’s close, but not perfect. Ideally, I’d want a key combo so I could just select a file, hit, say, F5, and enter the tags. Having to right-click the file breaks the flow, and isn’t much simpler than just calling Get Info and toggling the “Spotlight Comments” text field to enter stuff (which is essentially what is being quasi-automated here).

Some pretty interesting potential – a folksonomy-native filesystem on your desktop. Rock on.
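The tag-then-search workflow itself is simple enough to show in a few lines. This is a minimal in-memory sketch of the idea (the real tip stores tags in Spotlight Comments and queries them via Spotlight; the class and method names here are just illustrative): tag files, then ask for the intersection of tags the way a Smart Folder would.

```python
from collections import defaultdict

class TagIndex:
    """Minimal folksonomy index: tag files freely, query like a Smart Folder."""

    def __init__(self):
        # tag -> set of file paths carrying that tag
        self._by_tag = defaultdict(set)

    def tag(self, path: str, *tags: str):
        """Apply one or more free-form tags to a file."""
        for t in tags:
            self._by_tag[t.lower()].add(path)

    def smart_folder(self, *tags: str) -> set:
        """Files carrying every one of the given tags (empty query -> empty set)."""
        sets = [self._by_tag[t.lower()] for t in tags]
        return set.intersection(*sets) if sets else set()
```

The point of the folksonomy approach is visible even in this toy: no fixed vocabulary, no folder hierarchy, and a file can sit in as many "folders" as it has tags.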

Author D'Arcy Norman · Posted on May 14, 2005 · Categories Uncategorized · Tags 10.4-tiger, metadata

Folksonomy-enabled plugin for database tagging

Not sure if/how I’ll use this, but FreeTag sure sounds cool. It’s a PHP/MySQL magic widget that lets you add folksonomies and tags onto existing MySQL databases…

[via ::schwagbag::: Folksonomy-enabled plugin for database tagging]

Of course, now that tag clouds (folksonomies) are the new mullet…
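The trick behind bolting tags onto an existing database is a join table of (tag, object, tagger) triples, so the original tables never change. Here's a rough SQLite sketch of that pattern (the table and column names are illustrative, not FreeTag's actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tags (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE taggings (
    tag_id    INTEGER REFERENCES tags(id),
    object_id INTEGER,   -- row id in your existing, untouched table
    tagger_id INTEGER,   -- who applied the tag (the folksonomy part)
    PRIMARY KEY (tag_id, object_id, tagger_id)
);
""")

def add_tag(object_id: int, tagger_id: int, name: str):
    """Tag any existing row without altering its table."""
    conn.execute("INSERT OR IGNORE INTO tags(name) VALUES (?)", (name,))
    (tag_id,) = conn.execute("SELECT id FROM tags WHERE name = ?", (name,)).fetchone()
    conn.execute("INSERT OR IGNORE INTO taggings VALUES (?, ?, ?)",
                 (tag_id, object_id, tagger_id))

def objects_tagged(name: str):
    """All object ids carrying a given tag, regardless of who tagged them."""
    rows = conn.execute(
        "SELECT DISTINCT object_id FROM taggings "
        "JOIN tags ON tags.id = tag_id WHERE tags.name = ?", (name,)).fetchall()
    return sorted(r[0] for r in rows)
```

Because the tagger is part of the key, the same person can't double-tag an object, but two different people can apply the same tag — which is what makes tag counts (and tag clouds) meaningful.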

Author D'Arcy Norman · Posted on April 21, 2005 · Categories Uncategorized · Tags metadata

flickrGraph: Mapping relationships in Flickr

OK. This is insanely cool. Check out the flickrGraph relationship map for my Flickr account: a dynamically generated Flash concept map, based on the relationship data stored by Flickr.

Wow.

It’s also done really nicely (try dragging a person’s icon around…) Kinda like ThinkMap meets FOAF meets Flickr…

Fun things that you can do with metadata (without realizing that you’re playing with metadata).

UPDATE: Wouldn’t it be awesome if Technorati was able to display something like this for the link cosmos for a given URL?
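Under the hood, a map like this is just a graph traversal over contact lists. A small sketch of the idea (the contact data below is hypothetical; flickrGraph pulls the real lists from Flickr): start from one account and walk outward a fixed number of contact links, which is the neighbourhood such a map would draw.

```python
from collections import deque

# Hypothetical contact lists keyed by username.
contacts = {
    "darcy": {"alan", "brian"},
    "alan":  {"darcy", "cindy"},
    "brian": {"darcy"},
    "cindy": {"alan"},
}

def reachable(start: str, hops: int) -> set:
    """Everyone within `hops` contact links of `start`, via breadth-first
    search — the cluster a relationship map would render around one account."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        person, dist = frontier.popleft()
        if dist == hops:
            continue  # don't expand past the hop limit
        for nxt in contacts.get(person, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return seen
```

Dragging a node in flickrGraph re-runs the layout, but the underlying data is no more than this: who lists whom as a contact.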

Author D'Arcy Norman · Posted on February 10, 2005 · Categories Uncategorized · Tags metadata, software · 2 Comments on flickrGraph: Mapping relationships in Flickr
