hosting – D’Arcy Norman dot net (https://darcynorman.net) – no more band-aids

reclaiming ephemeral media – 27 May 2011 – https://darcynorman.net/2011/05/27/reclaiming-ephemeral-media/

Following Boone’s lead, I’m going to be working to reclaim as much of my online activity as possible. I set up a separate WordPress site to handle ephemeral media that are usually posted to Twitter, so that things like the Twitpic licensing brouhaha don’t apply. Because it’s just a blog, it can handle anything – entire galleries of images, audio, video, or any combination. I can also geotag posts, and add plugins to enable timeline and calendar views.

As far as Reclaiming goes, this was one of the simplest things to do – and yet the process wasn’t simple at all. It took me about half an hour to get going: setting up a new subdomain, installing WordPress (I need to migrate my sites to Multisite, but that’s for another time), and adding a MySQL database for it. Tweaking WordPress settings. Installing the Twitter Tools plugin to auto-broadcast new posts to my Twitter account (good god is Twitter cumbersome to configure – auth tokens, secret keys, etc… insanity). Then realizing I’d forgotten to set the timezone of the new media blog, adjusting that, and realizing the adjustment threw a wrench into the Twitter autoposting system (hopefully only until the 6-hour delta is caught up).

This is not something that 99.99% of people will do. But those 99.99% of people need to be able to Reclaim their stuff (including the low-value ephemeral media otherwise dumped into Twitpic/YFrog/etc…). This is why services like those are so successful, and why third party hosting is so tempting. It’s several orders of magnitude easier to just use a third party service than to roll up your sleeves and host stuff yourself. And doing it yourself leaves out any social layer (implemented here via Twitter autoposting, which may not be appropriate for other services – and I’m not comfortable with Twitter becoming even more entrenched as the social glue platform).

One more item for a magical Reclaim server appliance…

Update: I just realized that one of the photos I used to test, posted from the iOS WordPress app, wound up going to my main blog rather than my ephemeral media blog. Totally operator error – I selected the wrong blog as destination – but points out how things get more complicated when doing stuff yourself…

silos of people – 24 May 2011 – https://darcynorman.net/2011/05/24/silos-of-people/

I’ve been experimenting with bits of software to take control of my online content. The functionality is all there for me to run my own stuff, without feeding corporate silos. I can post text, images, photos, videos. I can store files and access them from anywhere. Without having to hand my bits over to any company.

Except when I want to play with others. To do that, I still need to wade into the silos. Flickr isn’t about photo storage or hosting – it’s about seeing what my friends and family are photographing. Twitter isn’t about posting 140-character updates – it’s about seeing the flow of activity from the people I care about.

Although I can reproduce the content-centric functionality for posting and sharing content online, I can only do it in an extremely antisocial way. I do it by myself, on my own. Away from others. Alone.

I’d nuked my Facebook account long ago. I was happy to not be feeding Zucker’s beast. Until I realized that (nearly) everyone I cared about was there – people who would never post to a blog, or maintain a photo site, or anything that’s content-centric and close to the metal. They just want to hang out and share stuff with people they care about. So I sucked it up and recreated a Facebook account. I’m torn – on the one hand, it felt like a failure. On the other hand, it feels like a great way to keep up with what friends and family are doing – especially since many of them would never venture out of the corporate silo to post things on their own.

But the feeling of failure is pretty strong. I think we’re failing as a culture when the only effective way to connect with people is to hand our social (online and offline) network graphs to a corporation to monetize at will. Our social connections are far too important to trust to Google, Facebook, Twitter, or the next big shiny thing. We need to step up, somehow, and take control back. I have no idea how that could happen. There have been many false starts [1] [2] [3] [4], but they’ve been so highly technical that the people who really need them wouldn’t even have known the options existed (and so, effectively, they didn’t). That’s why corporate silos have been so successful – they make the plumbing of online social connection disappear as much as possible.

We need a human-scale, non-technical way for individuals to manage their connections with other individuals, without having to hand control over those connections to any company to mine and monetize. It’s not about content – it’s about managing connections to people, and to the things they are doing.

Update: As usual, Boone Gorges is already thinking about this, in far greater depth than I managed. Awesome. I’ll be thinking through how I should Reclaim. Sign me up.

  1. OpenID – own your online identity!
  2. Diaspora – Your own personal Facebook
  3. FOAF – XML describing your identity and social graph
  4. probably a bunch more that I just can’t think of right now…
on breaking away from hosted silos – 29 December 2010 – https://darcynorman.net/2010/12/29/on-breaking-away-from-hosted-silos/

This is a long, rambling, incomplete blog post that’s been rattling around in my head for a week. I decided to try to just put something in writing to see if I could make it less unclear. Caveat emptor.

If people are to manage their own content, forming their digital identities, they need a way to host software and content that doesn’t require obscure and detailed technical knowledge.

We early adopters are not normal. We’ve been so close to these technologies, for so long, that we forget what it’s like to be new to the stuff – or what it’s like to not live and breathe tech every day. Most people are not like us. They don’t know what HTTP is – it’s just some silly letters before the address of a website. They don’t know what DNS is. They don’t know what FTP is. They don’t know what SSH is. Or MySQL. Or PHP. Or Perl. etc…

And they shouldn’t have to know these things in order to be full and meaningful participants in online discourse.

Currently, we have a geeky elitism. On one side are the early-adopting technophile geeks, who are aware of how software and systems are designed, and who may be able to design, host, or manage their own software and content. On the other side is everybody else – people who don’t know, and don’t care, about the technical mumbojumbo that geeks seem to like to talk about incessantly. Geeks. Jeez.

I think there are FAR more people like my Dad, than there are like me. My Dad is 75, and has been using computers for as long as I have. He brought home a Vic=20, followed by a C=64, C=128, an Amiga, and now he’s on his second iMac. He’s not scared of computers, or of technology in general. But he doesn’t live it. He plays, but he sees it as a way to do stuff, not as tech for the sake of tech. At 72, he found Skype, on his own, and set up an account to talk to my older brother who lives on the other side of the globe. He can get stuff done, but gets stuck. Like, a lot.

So, when I’m thinking about “breaking away from hosted silos” I try to keep my Dad in mind. Is this something he cares about? Is it something he could do? Would he?

Why would people want to manage their own content? Third party silos are convenient, but temperamental and transient. It’s so easy to share content on a hosted (and free) service, without having to think about setting anything up or configuring anything, or running backups, or registering domain names, or any of an unlimited list of details required to host software online.

But these services exist to monetize you, your relationships, and your content. And they may change in ways you don’t appreciate, or simply disappear – leaving you suddenly without a potentially substantial component of your online life.

Data portability – the ability to export all (or even any) of your content, to be imported into some other application – gives some sense of security or insurance. But even that requires some technical background that many people don’t have and don’t want (and shouldn’t need).

When I yanked my Delicious.com bookmarks through the export process – through a hidden API URL, in a cryptic XML format – I had to futz around for a while until I got the export file. Then it was too big to import into anything else directly, so I had to futz around with the raw XML to slice it into 3 separate files. I finally got my bookmarks migrated into a self-hosted instance of Scuttle. It wasn’t exactly rocket surgery, but it wasn’t a trivial task, either. These are things that my Dad would never be able to do. Nor should he have to.
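For what it’s worth, the futzing looks roughly like this when reconstructed as a small Ruby sketch. The posts/all endpoint, the credentials, and the chunk count are assumptions for illustration, not a record of exactly what I ran:

#!/usr/bin/env ruby
# Sketch: pull the full Delicious bookmark export and slice it into smaller
# files that an importer can swallow. Endpoint and credentials are assumed.
require 'net/http'
require 'net/https'
require 'rexml/document'

USER     = 'your-username'   # placeholder
PASSWORD = 'your-password'   # placeholder
CHUNKS   = 3                 # number of smaller export files to produce

uri  = URI.parse('https://api.del.icio.us/v1/posts/all')
http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = true

req = Net::HTTP::Get.new(uri.path)
req.basic_auth(USER, PASSWORD)
xml = http.request(req).body

# Split the <post> elements into CHUNKS separate, importable files.
doc   = REXML::Document.new(xml)
posts = doc.root.elements.to_a('post')
slice_size = [(posts.size / CHUNKS.to_f).ceil, 1].max

posts.each_slice(slice_size).with_index do |slice, i|
  File.open("delicious-export-#{i + 1}.xml", 'w') do |f|
    f.puts '<?xml version="1.0" encoding="UTF-8"?>'
    f.puts "<posts user=\"#{USER}\">"
    slice.each { |post| f.puts "  #{post}" }
    f.puts '</posts>'
  end
end

Trivial for a geek, completely opaque for everyone else – which is exactly the point.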

And, even though my bookmark data was all there, my network – the relationships I’d built over the years – was gone.

Right now, the best way to manage your own domain is to set up a shared hosting account on GoDaddy or Dreamhost or any of a long list of others. From there, you likely (but not necessarily) have access to the cPanel (or maybe Fantastico) interface for managing the server space – domain names, databases, directories, etc… Gardner Campbell describes this as a good starting point for students to manage their own stuff, as a 21st century digital literacy skill. I disagree – I don’t think it’s easy enough. It’s easier than managing your own server, but it’s far more complicated than most people would be comfortable with. My Dad calls me regularly for help with finding things on his Mac. I can’t imagine how many calls I’d get if he tried setting up a web hosting account. He’d have to move in with us, and that’s not the best solution to getting Dad to host his own stuff.

Actually, Dad’s iMac already has everything he’d need, in order to host his own stuff. It comes stock with Apache2, MySQL, PHP, and all kinds of other goodies. All he’d have to do is sign up for a DynDNS.org account, light up the built-in server apps, and install whatever web applications he wants. But Dad isn’t about to do that. Even that single-sentence description of the process would make his eyes glaze over.
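To make the domain-name piece concrete: here’s a minimal Ruby sketch of a dynamic DNS update ping, assuming the old dyndns2-style update protocol that DynDNS used. The hostname, credentials, and IP address are placeholders, and this is exactly the kind of plumbing Dad should never have to see:

#!/usr/bin/env ruby
# Minimal sketch: tell a dynamic DNS service where the home machine lives.
# Assumes a dyndns2-style update protocol; all values are placeholders.
require 'net/http'
require 'net/https'

HOSTNAME = 'dads-imac.example-dyndns.org'  # hypothetical dynamic hostname
USERNAME = 'dad'                           # hypothetical account
PASSWORD = 'secret'                        # hypothetical password
MY_IP    = '203.0.113.42'                  # current public IP (placeholder)

uri  = URI.parse("https://members.dyndns.org/nic/update?hostname=#{HOSTNAME}&myip=#{MY_IP}")
http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = true

req = Net::HTTP::Get.new("#{uri.path}?#{uri.query}")
req.basic_auth(USERNAME, PASSWORD)
req['User-Agent'] = 'home-server-sketch/0.1'  # update services expect a UA string

puts http.request(req).body  # e.g. "good 203.0.113.42" or "nochg ..."

An agent like this would run in the background and ping only when the home IP actually changes – invisible plumbing, the way it should be.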

So, what’s the alternative? If most people (including my Dad) would never set up a web hosting account, or run their own webserver, how are they going to host their own content?

The closest thing I’ve seen as an ideal alternative is the Opera Unite project. Built into the Opera browser, is a server that can install and run a number of applications – things like webservers, file sharing, whiteboards, etc… It only takes a single click to install an application. Then, it runs on your own computer, storing the data on your own hard drive. Unite takes care of mapping a domain name to your running copy of the Opera browser, and sends people directly to your computer instead of some server Out There.

Here’s the Applications menu running on my laptop, letting people who visit my operaunite.com domain choose what they want to see.

[Image: Opera Unite Applications Menu]

The stock Unite apps are decent enough, but don’t really replace what people are doing online. There is a blog app, but it sucks terribly. The bookmarks app is no Delicious (nor Scuttle). So, the apps aren’t a huge draw.

But the model is great. Software that runs on your own computer, letting you control your own content. It handles an automatic domain name, for those who don’t have one (or don’t want to set one up). But it also works with a regular domain name, as long as it’s configured to point to your home IP address. Unite even starts to address the social network layer – letting you connect with friends through the Unite service and see their activity streams.

Opera Unite is cool, but it’s not the killer platform for hosting your own stuff. It’s cross platform, but it requires people to switch browsers in order to run a server. They should be decoupled. A separate, ideally cross platform, server platform is required for this to really take off.

We’ve got similar models in non-server software. The App Store shows how easy it is to find and install apps – something, again, that many people just don’t do on their desktop computers. If they do install something, it’s because they’ve been asked or told to, not because they felt comfortable trying out a new app, experimenting and exploring. The app store changes that. It’s trivial to try out a new app, without worrying about installing it, or breaking anything.

So, the characteristics of this mythical standalone self-hosting platform:

  • lightweight – tomcat need not apply
  • cross-platform (Mac, Windows, maybe Ubuntu?)
  • server “app store” analog for easy one-click installs
  • simple domain name setup (default to a computer.username.ihostmyownstuff.net but allow/encourage custom domains)
  • simple interface for managing apps – add/delete/start/stop/config/etc… without having to edit files
  • simple interface for managing app data – backup/export/import/config/etc… without having to edit files

What about the apps? Traditional PHP applications currently require too much geek stuff to properly manage – you end up editing files, auditing plugin and theme files before installing them to verify they don’t contain evil stuff, etc… My Dad won’t be doing that. He literally needs a one-click install.

So, if it’s really one-click, what does the app look like? Could these apps be some form of native code, rather than bundles of interpreted PHP etc…? Also, a stand-alone desktop app may not require MySQL or PHP or any of the other common parts of current traditional web apps. What if these apps were compiled native code, using some form of stand-alone NoSQL data storage?
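As a rough sketch of the “no MySQL, no PHP” idea – not anything that exists – here’s what a tiny self-contained photo gallery’s data store could look like in Ruby, using the stand-alone PStore library that ships with the language. The gallery structure and file names are made up for illustration:

#!/usr/bin/env ruby
# Sketch of self-contained app storage: no database server, no PHP runtime,
# just one file on the owner's machine. Gallery structure is hypothetical.
require 'pstore'

store = PStore.new('photo-gallery.pstore')  # one file holds everything

# Add a photo record.
store.transaction do
  store[:photos] ||= []
  store[:photos] << {
    file:    'grandkids-2010-12-25.jpg',  # made-up filename
    caption: 'Christmas morning',
    posted:  Time.now
  }
end

# List the gallery, newest first.
store.transaction(true) do  # read-only transaction
  store[:photos].sort_by { |p| p[:posted] }.reverse.each do |p|
    puts "#{p[:posted].strftime('%Y-%m-%d')}  #{p[:caption]} (#{p[:file]})"
  end
end

Whether the storage is some NoSQL engine or a flat file matters less than the fact that the data stays on the owner’s machine, in something that can be backed up with a single file copy.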

One of the nice things about the use of PHP for web apps is that they are easily readable and modifiable – anyone with a text editor can hack on the code, tweak it, or fix things. But how many people actually do that, compared to the number of potential users of the software? Is “anyone can edit” worth the cost of “everyone has to manage”? WordPress has come pretty close to trivial administration – the app has a one-click updater, and plugins and themes update almost automatically from within the Dashboard. But it needs to get installed and configured in the first place. And there are lots of other apps out there that don’t offer anywhere near WordPress’s level of interface polish.

Dad isn’t going to install a copy of Gallery2. And he certainly isn’t going to hack a theme for it.

But, he could click “install photo gallery” from the mythical self-hosting app directory. In my head, it’s as simple as browsing the App Store on an iPhone, and clicking on an app to install it. Done. No geeky stuff required.

Of course, this would only handle the app/content side of things. What about the magic of the network of social connections? There are a few models. Google’s OpenSocial project may be a solution. Or, there could be central connection hubs – similar to GameCenter for iPhone games – where people register with the service, and all of their apps send notifications to the service (or, alternatively, let friends know where to get notifications sent directly).

And, all of this is based on the (likely false) assumption that people really give a crap about running their own stuff and owning their software and data rather than continuing to feed their activity streams into “free” hosted services so others can monetize them by inserting ads or reselling data and relationships.

Shared items from my feed reader – 10 September 2009 – https://darcynorman.net/2009/09/10/shared-items-from-my-feed-reader/

One of the things I was missing when I switched from Google Reader to Fever˚ was a way to share items from my subscriptions. Fever˚ didn’t have any way to generate a feed of things I saved, so it was kind of a separate silo. But the most recent version of Fever˚ includes a cool new feature to share my Saved items in an RSS feed. Easy peasy.

Here’s an embedded view of the last 30 saved items, thanks to the magical wondrousness of Feed2JS: (it’ll probably bork in the feed, though. irony.)
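For anyone curious what that embed is doing under the hood, here’s a minimal Ruby sketch of the same idea done server-side – fetch the saved-items RSS feed and spit out an HTML list of the latest 30 items. The feed URL is a placeholder, not my actual Fever˚ address:

#!/usr/bin/env ruby
# Roughly what a Feed2JS-style embed produces: fetch an RSS feed and render
# the latest items as an HTML list. The feed URL is a placeholder.
require 'net/http'
require 'rss'

FEED_URL  = 'http://example.com/fever/saved.rss'  # placeholder saved-items feed
MAX_ITEMS = 30

xml  = Net::HTTP.get(URI.parse(FEED_URL))
feed = RSS::Parser.parse(xml, false)  # false = skip strict validation

puts '<ul class="saved-items">'
feed.items.first(MAX_ITEMS).each do |item|
  puts "  <li><a href=\"#{item.link}\">#{item.title}</a></li>"
end
puts '</ul>'

Feed2JS does essentially this on its own server, then hands back a chunk of JavaScript that writes the list into the page.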

There are lots of other features that got added in the last update, including integration with Twitter and Instapaper. Fever˚ just keeps getting better and better…

on context and identity – 21 October 2008 – https://darcynorman.net/2008/10/21/on-context-and-identity/

I had a discussion with King Chung Huang and Paul Pival this morning, about one of King’s current research projects. He’s working on the topic of context and identity – what it would mean, from both institutional and individual perspectives, if our digital identities and contexts were pulled out of the silos of Blackboard, email, and other isolated and closed systems. What would it mean if every person, group, and place had a URL that is aware of contexts (institutional, academic, geographical, temporal, etc…) and is also able to gather and provide lists of relevant resources?

A Person would have what is essentially a profile (name, role, contact info, interests, courses, websites, etc…); a Group would describe its type (department, faculty, course, session, club, etc…) as well as lists of relevant bits of info (uses a wiki, has a Blackboard course, meets at this location at this time, has these members, etc…). And Places would describe physical locations, knowing which resources are available, where they are, which Persons and Groups are interested in the Place, as well as scheduling information, etc… (hmm… do we need a fourth primitive type of Time?)

At first blush, it felt like a “portal” problem. Set up a personal Pageflakes or Netvibes page, dropping in some relevant widgets and links. Everyone can customize their own page, and a directory could be created to help discover people, groups, and places.

But that approach loses any real meaning of the contexts. It’s just a dumb content display utility, without being aware of the meaning of the contexts of the content, or of the relationships between people, groups and places.

We talked for a while, and came to the realization that there is a missing fundamental concept. One that describes the identity and context, and ties the salient bits of info together in a way that can then be used to build novel applications.

Currently, a prof sets up a Blackboard course. They add content to the course. They add Links to various bits. But none of this stuff really knows the context – just that it’s some text that’s been pasted into a container within Blackboard. A prof could spend a lot of time and effort building up a course site in Blackboard, only to kill it at the end of the semester. (sure, it could be cloned, but again that’s context-unaware).

What if the course were just a Group, set up with its own identity and context, and aware of various bits of information? Is Called Mythical Course 301. Has Course ID of MYTHCRSE301. Has Professor… Has TAs… Has Blackboard Course… Uses Wiki at… Podcasts available at… Meets MWF 1000-1050 at ST148…
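To make that concrete, here’s a rough Ruby sketch of what such a Group primitive might carry around. The field names and values are the hypothetical Mythical Course 301 from above – nothing here maps to a real system:

# Hypothetical sketch of a context-aware Group primitive, using the
# Mythical Course 301 example. All names and URLs are placeholders.
Group = Struct.new(:name, :type, :course_id, :people, :resources, :meetings)

mythical_course = Group.new(
  'Mythical Course 301',
  :course,
  'MYTHCRSE301',
  { professor: 'Dr. Example', tas: ['TA One', 'TA Two'] },      # placeholder people
  { blackboard: 'https://blackboard.example.edu/MYTHCRSE301',   # placeholder URLs
    wiki:       'https://wiki.example.edu/mythcrse301',
    podcasts:   'https://podcasts.example.edu/mythcrse301.rss' },
  [{ days: 'MWF', time: '1000-1050', place: 'ST148' }]
)

# Anything that understands the Group can answer contextual questions:
puts mythical_course.resources[:wiki]
puts mythical_course.meetings.first[:place]

The point isn’t the particular fields – it’s that the course knows about itself, so other tools can discover its wiki, its Blackboard site, or its meeting place without a human re-entering any of it.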

The idea that Paul came up with is that this is related to the mythical EduGlu concept, but as a necessary first step that is currently missing. Right now, there would be much manual labour to set up an EduGlu service to aggregate activity that happens as part of the practice of teaching and learning. What if we could take advantage of the contexts of Person, Group, and Place to automate that process? We could pull sets of RSS feeds into the aggregator, apply some processing, and export different formats for use in different contexts. Map views. Calendar views. Timeline views. Analysis of individual and group contributions. Interaction analysis. etc…
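A bare-bones Ruby sketch of that aggregation step – pull a Group’s feeds, merge them, and emit a date-sorted timeline. The feed URLs are placeholders standing in for whatever the Group knows about, and the feeds are assumed to be RSS 2.0:

#!/usr/bin/env ruby
# Bare-bones aggregation sketch: merge a Group's RSS feeds into one
# date-sorted stream. Feed URLs are hypothetical; RSS 2.0 is assumed.
require 'net/http'
require 'rss'

feed_urls = [
  'https://wiki.example.edu/mythcrse301/changes.rss',
  'https://blog.example.edu/mythcrse301/feed',
  'https://podcasts.example.edu/mythcrse301.rss'
]

entries = feed_urls.flat_map do |url|
  xml = Net::HTTP.get(URI.parse(url))
  RSS::Parser.parse(xml, false).items.map do |item|
    { date: item.pubDate, title: item.title, link: item.link, source: url }
  end
end

# Newest first; the same flat list could feed a map, calendar, or timeline view.
entries.sort_by { |e| e[:date] }.reverse.each do |e|
  puts "#{e[:date].strftime('%Y-%m-%d')}  #{e[:title]}  <#{e[:link]}>"
end

The interesting work – contribution analysis, interaction analysis – would happen on top of a merged stream like this.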

But, is there some tool, application or platform that is currently able to handle this abstracted concept of context – of Person, Group and Place – that can be used to create a flexible *cough*portal*ahem* to manage and display the torrents of centralized and decentralized information?

Moved to CanadianWebHosting.com – 10 February 2008 – https://darcynorman.net/2008/02/10/moved-to-canadianwebhostingcom/

If you can read this, the move she is done. I’ll write more on that later, but hopefully the performance problems this site has been having for almost 2 years will be a thing of the past *touch ethernet*

Open Education Course: week 2 reading – 8 September 2007 – https://darcynorman.net/2007/09/08/open-education-course-week-2-reading/

Notes for week 2 of David Wiley’s Intro to Open Education course at Utah State University, on Giving Knowledge for Free: The Emergence of Open Educational Resources – Organization for Economic Cooperation and Development, Centre for Educational Research and Innovation.

I think I’m definitely falling down on the academic rigour of my responses – I should be providing a much deeper response, rather than just barfing out some thoughts and questions. I’ll try to pick it up for week 3.

There is a very strong overlap between “Open Educational Resources” and “Learning Objects” – so, what is the difference? Why should anyone care about OER, when LO failed? LO had a strong focus on metadata, on machine-mediated interoperability. OER is focused more on the content and the license. There are no technical standards to define an OER, merely the fact that someone created an educational resource (however that is defined) and decided to release it under an open license (typically, CreativeCommons). Because interoperability is not the primary goal, the content creators are primarily solving their immediate needs for content, and secondarily offering the content for reuse. Learning Objects began and ended with metadata, and as a result never really got much traction.

In my personal experience, I share my content freely under a simple CreativeCommons Attribution license, not out of some sense of altruism, but because it doesn’t cost me anything to do so – either in time or resources. I create and publish content primarily for my own use, applying the CC By: license, and if someone else can benefit, then so be it. But sharing is not the primary goal of the activities of creating and publishing content. As a result, I’ve had photographs on magazine covers, published in books, used in board games, and in more websites and reports than I can track. All of that reuse was secondary to my initial purpose for creating and publishing the content – even if it has become more important than the original use. An argument could be made that I have lost potential revenue by releasing content for free use (even in a commercial context such as a book or magazine) but if I had locked the content down, that reuse would not have happened anyway. At the very least, sharing costs me nothing (either financially or in time) because the production of this content would have occurred even if the content was not shared. Further, I have had direct requests for separate commercial licensing of materials outside the bounds of CC By: (specifically for projects that couldn’t provide proper attribution) and have granted these licenses as needed – the CreativeCommons license is non-exclusive, providing much flexibility.

From an institutional perspective, I encourage open sharing of academic content wherever I can, for two reasons. First, it’s the right thing to do in order to disseminate the academic content as widely as possible. Second, from an economic point of view, in many cases the development of content has already been paid for by members of the general public – either through taxes which provide governmental financial support for the institution, or by contributions from other governmental sources. As a result, the content is indirectly paid for by the taxpayers, meaning they have a right to benefit from the process.

With this in mind, I think it is important to find processes of producing content whereby it is easier and more efficient to create “open” content than locked or proprietary content. The OpenContentDIY project with Jim is an example of this – using a hosted weblog/CMS application to produce content in a way that makes it easier to do it in the “Open” than not.

OERs, and digital content in general, are important because of the low cost of distribution – not free, but about as close as possible. There is also a strong environmental incentive – no forests are pulped to generate .PDF documents, and no oil is pumped to transport TCP/IP packets through the fiber optic backbone of the Internet. Also, by selecting an open content format such as HTML, XHTML, or XML, or even just a well documented and widely available file format such as PDF, JPG, PNG or RTF, content is usable on a wide variety of platforms, and portions of the content are available for reuse in other applications.

One trend that I find very impressive and promising is the growing willingness of professors to have their students “go public” (as John Willinsky advocates). I have talked with a professor of a high-enrollment course at my university, who plans on having over 1000 undergraduate students collaborate to create open online resources to describe and discuss various topics. This is a strategy that would be impossible without digital content distribution, and would be difficult without open content licenses such as CreativeCommons. At the least, future cohorts of students will have a body of work to use as a starting point for their own projects. Ideally, future cohorts of students will be able to refine and extend the existing body of content, working to evolve the materials over time.

I am unconvinced of the need for repositories and referatories. As long as an OER has been produced using a suitable file format, and has a machine-readable license deed applied to it, tools such as the CreativeCommons Search utility should suffice. Individuals and organizations would be free to publish their content in any location visible to the open Web, and allow the existing infrastructure of Google, Yahoo, and the like to spider and index their resources for all to find and use. There is no need to create walled gardens or silos of open educational content in the form of repositories or referatories.

I was surprised to see, in the survey of OER projects, that they all seem to originate in “have” countries – first world countries and institutions releasing content as OER. That is probably to be expected, since those institutions will be more active in content production in general.

Question: Are third world countries seen purely as “consumers” of OER shared by benevolent first world nations?

I would hope to see significant OER production projects originating in third world nations, to foster culturally relevant materials and counter the “cultural imperialism” concerns.

One problem with a rise in available OER materials is the lack of “certification” in the content. There is no content review board, or process to verify accuracy and validity of the content. Conventional content distribution through printed books placed a burden on the publishers and editors, whose names appeared on the book. An OER could be created and published by an individual, without any accreditation or attribution.

Question: How best to determine accuracy and validity? Perhaps this is an opportunity for the repositories and referatories? Services like Merlot provide some of this functionality already, and there are opportunities for other localized services to review and “approve” available OER materials for use in various contexts.

Script for running Cron on all sites in a shared Drupal instance – 1 January 2007 – https://darcynorman.net/2007/01/01/script-for-running-cron-on-all-sites-in-a-shared-drupal-instance/
After realizing that the sympal_scripts were silently failing to properly call cron.php on sites served from subdirectories on a shared Drupal multisite instance, I rolled up my sleeves to build a script that actually worked. What I’ve come up with works, but is likely not the cleanest or most efficient way of doing things. But it works. Which is better than the solution I had earlier today.

I also took the chance to get more familiar with Ruby. I could have come up with a shell script solution, but I wanted the flexibility to more easily extend the script as needed. And I wanted the chance to play with Ruby in a non-Hello-World scenario.

Here’s the code:

#!/usr/local/bin/ruby

# Drupal multisite hosting auto cron.php runner
# Initial draft version by D'Arcy Norman dnorman@darcynorman.net
# URL goes here
# Idea and some code from a handy page by (some unidentified guy) at http://whytheluckystiff.net/articles/wearingRubySlippersToWork.html

require 'net/http'

# this script assumes that $base_url has been properly set in each site's settings.php file.
# further, it assumes that it is at the START of a line, with spacing as follows:
# $base_url = 'http://mywonderfuldrupalserver.com/site';
# also further, it assumes there is no comment before nor after the content of that line.


# customize this variable to point to your Drupal directory
drupalsitesdir = '/usr/www/drupal' # no trailing slash

Dir[drupalsitesdir + '/sites/**/*.php'].each do |path|
  File.open(path) do |f|
    f.grep( /^\$base_url = / ) do |line|
      line = line.strip
      baseurl = line.gsub('$base_url = \'', '')
      baseurl = baseurl.gsub('\';', '')
      baseurl = baseurl.gsub('  // NO trailing slash!', '')

      if !baseurl.empty?
        cronurl = baseurl + "/cron.php"
        puts cronurl
 
        if !cronurl.empty?
          url = URI.parse(cronurl)
          req = Net::HTTP::Get.new(url.path)
          res = Net::HTTP.start(url.host, url.port) {|http|http.request(req)}
          puts res.body
        end
      end
    end
  end
end

No warranty, no guarantee. It works on my servers, and on my PowerBook.

Some caveats:

  • It requires a version of Ruby more recent than what ships on MacOSX 10.3 server. Easy enough to update, following the Ruby on Rails installation instructions.
  • It requires $base_url to be set in the settings.php file for each site you want to run cron.php on automatically.
  • It requires one trivial edit to the script, telling it where Drupal lives on your machine. I might take a look at parameterizing this so it could be run more flexibly.
  • It requires cron (or something similar) to trigger the script on a regular basis.
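For example, a single crontab entry along the lines of 0 * * * * /usr/local/bin/ruby /usr/local/scripts/drupal_multisite_cron.rb (the path and schedule are hypothetical) would sweep every configured site once an hour.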
Trouble with cron.php in a Drupal multisite configuration – 29 December 2006 – https://darcynorman.net/2006/12/29/trouble-with-cron-php-in-a-drupal-multisite-configuration/

I’m running a couple of servers full of Drupal sites hosted in a multisite configuration (one copy of Drupal used to host dozens of sites, each with their own sites/sitename directory). I’d been using sympal_scripts to automatically run Drupal’s cron.php script for each site, in order to keep search indexes up to date and run other routine maintenance functions as expected. It’s easy enough to drop a curl http://server/site/cron.php into a crontab, but as you start adding sites to the server, it becomes unwieldy to maintain a current crontab of sites to cron.

Sympal_scripts attempts to read through the scripts directory, poking through each site and loading Drupal for each one in order to fire off the appropriate cron.php. It’s been adding records to the Drupal watchdog table, so I expected it to be working just fine. Except it hasn’t actually been running cron.php – it’s been failing silently.

Looks like there’s something funky in the way Drupal refers to the $base_url variable for the site. It’s set in each settings.php file, so it should be as simple as returning the content of a string variable. But it’s borking, and returning the name of the directory containing the site’s settings.php file.

Say I’ve got a server, myserver.com, with a bunch of sites all configured to be served as subdirectories of that server’s main website, such as myserver.com/site1 and myserver.com/site2

Each site has a respective directory within the Drupal installation’s sites directory, such as myserver.com.site1 and myserver.com.site2 (the / are converted to . for use in the directory name because / would be invalid in a directory or filename).

When Drupal is initialized by sympal_scripts/cron.php, it’s getting $base_url values of http://myserver.com.site1 and http://myserver.com.site2.

So, when it goes to fire off the cron task, it’s using URLs like: http://myserver.com.site1/cron.php

It works fine on sites configured to run on their own domain, as the domain matches the site directory.

WTF? The http:// shows that it’s reading the value within each settings.php file (or does it?), but why is it retaining the .site1 rather than /site1?

Failing that, is there a better way to reliably run cron.php on a bunch of hosted sites? I’m thinking of writing a script that crawls the sites directory and pulls out the $base_url values for each site and then fires off a curl base_url on the lot of them.

It’d be really cool if Drupal’s own cron.php had a command-line version, capable of operating on any (or all) configured sites. Any ideas?

Domain squatters suck – 27 October 2006 – https://darcynorman.net/2006/10/27/domain-squatters-suck/
I’ve been trying to move domain registration and DNS hosting for darcynorman.net from GoDaddy to Dreamhost for a couple of months. It’s been a long and frustrating process, involving faxing my driver’s license to Arizona to somehow prove I am who I say I am.

I just logged into my Dreamhost account to check on the status (still hasn’t finalized – they sure did set it up in a hurry, but it takes a looooong time to switch off of GoDaddy). On a lark, I tried adding registration for darcynorman.com. But Dreamhost’s registration utility complained that the domain was already taken.

Mwaaaah? Another D’Arcy Norman out there? Lemme check that out. A quick whois darcynorman.com turned up this:

   Domain Name: DARCYNORMAN.COM
   Registrar: GO DADDY SOFTWARE, INC.
   Whois Server: whois.godaddy.com
   Referral URL: http://registrar.godaddy.com
   Name Server: CNS1.CANADIANWEBHOSTING.COM
   Name Server: CNS2.CANADIANWEBHOSTING.COM
   Status: REGISTRAR-LOCK
   Updated Date: 16-mar-2006
   Creation Date: 16-mar-2006
   Expiration Date: 16-mar-2007

Oh, wait. No. It’s a domain squatter. Sitting on my name, assumedly hoping for a portion of the mad cash this blog generates. Mad cash, I tell you. Some lame squatter leech decided to register my name in the hopes I’d pay a ransom to get it back. At least the squatter is using a Canadian service provider to park the DNS for the domain. I guess that’s better than having it offshored to Moscow or something.

The combination of cheap domain registrations and “secure/private” registrations where you can hide behind a proxy make this practice possible. When I register domains, I need to go through CIRA verification, accept agreements about usage, etc… But these roaches can register other people’s names and park them for ransom. Rules (like locks) are for the honest people.

Screw you, squatter. I just went and registered darcynorman.ca – the only other variant of the domain I’d care about. Go ahead and squat on the rest, you rat bastage.
