My Blog Has Moved to audreymroy.com/blog/
audreyroy

From now on, I'll be blogging at http://audreymroy.com/blog/.  Please update your RSS readers!  Thanks.

Posted via email from Audrey M. Roy


Kiwi PyCon, DjangoCon US, and PyCodeConf Recap
audreyroy

I just got back from a long conference circuit, attending and speaking at PyCon Australia, Kiwi PyCon, DjangoCon US, and PyCodeConf.  It was a ton of work, but it was a blast.

I already blogged about PyCon Australia, but the time in between the other conferences was just a bit too hectic for blogging.

Kiwi PyCon 2011

I gave the opening keynote speech at Kiwi PyCon in Wellington, New Zealand. The talk was meant to be mildly provocative but in an inspiring, "go out and code" kind of way.

The Pythonistas of New Zealand are amazing. I met more Twisted devs than I've ever met in my life, attended tons of hardcore Python talks by women (and by men too), and learned all sorts of new things. I was also blown away by the hospitality of the conference organizers, particularly Tim McNamara and Richard Shea.

After the conference, I spent a couple of days going around the countryside and South Island coastal towns with Danny. It was spectacular.

DjangoCon US 2011

I co-presented the Django Package Thunderdome with Daniel Greenfeld, where we shared the results of a survey of the most recommended third-party packages in the Django world.

I got to attend with several of the LA PyLadies as well as others whom I knew online through IRC #pyladies and the PyLadies Sponsorship Program. Hanging out in the unofficial PyLadies welcome suite was more fun than should be allowed :)

The DjangoCon US organizers (Sean O'Connor and Steve Holden) let us get away with tons of things, including selling "Djangsta" shirts to benefit PyLadies and setting up a PyLadies welcome table beside the registration desk.

PyCodeConf 2011

I attended and spoke at the first ever PyCodeConf, a new kind of Python conference with a radically different format. Speakers are invited to speak about whatever they desire relating to the theme ("The Future of Python"), in front of a room of round tables. In between talks there are long breaks to encourage discussion. As a result, talks are edgier, and you really get to know people and possibly shape the future together.

I gave a talk about how third-party package ecosystems either form and flourish or don't form, depending on various factors. I brought up packaging patterns and anti-patterns seen in the Python package ecosystem as well as those of other languages.

This was a conference with a superstar lineup, including many notable women speakers whom the organizers went out of their way to invite.  It was very thoughtfully planned by the organizers of CodeConf and JSConf (Tom Preston-Werner, Chris Wanstrath, and Chris Williams), and the attention to detail really showed.  

The informal chats and bonding during the after-hours parties made this conference especially worthwhile. There's something special about talking to other developers while you're in a 14th floor swimming pool.

Summary

Overall it was thrilling to get my thoughts out there and try to inspire people all over the world. It was also quite nerve-wracking and stressful, but I'm glad I did it.

I learned tons and am already applying much of that knowledge directly to projects at work at Cartwheel Web.  If your employer doesn't already send you to Python conferences, you should ask to be sent.  You come back with experiences, connections, and knowledge that are priceless.

Posted via email from Audrey M. Roy


How to apply those "failed" changes from your GitHub fork queue
audreyroy
Here's how to apply changes that your GitHub fork queue marks as "Will
likely not apply cleanly".

# Add the contributor's fork as a new remote (name and URL come from the fork queue entry)
git remote add ddurst http://github.com/ddurst/whydjango-ideas.git
# Fetch all remotes, including the one you just added
git remote update
# Merge their master branch into your current branch, resolving any conflicts by hand
git pull ddurst master
# Check that the working tree is clean and the merge looks right
git status
# Push the merged result back up to your fork
git push

Posted via email from Audrey M. Roy


Getting outgoing email to work in Django and Pinax
audreyroy
Django and Pinax steps

(I've got Django 1.2, Pinax 0.9a1)

First, install postfix (an SMTP server) on your VPS (in my case, a Linode node or Rackspace Cloud server).
sudo apt-get install postfix

Enter "yourdomain.com" when the installer asks for your system mail name.  There, now you have your own SMTP server.
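Before touching Django, you can check that Postfix is actually accepting mail with a quick Python snippet run directly on the server. This is just a minimal sketch; the addresses are placeholders to swap for your own.

import smtplib
from email.mime.text import MIMEText

# Build a trivial test message (swap in your own addresses).
msg = MIMEText('Postfix is accepting mail on localhost.')
msg['Subject'] = 'Postfix test'
msg['From'] = 'yourdomain-noreply@yourdomain.com'
msg['To'] = 'you@example.com'

# Hand it to the local Postfix server on port 25.
server = smtplib.SMTP('localhost', 25)
server.sendmail(msg['From'], [msg['To']], msg.as_string())
server.quit()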

Add something like this to your settings.py so that outgoing mail goes out through Postfix.
DEFAULT_FROM_EMAIL = 'Your Site <yourdomain-noreply@yourdomain.com>'
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_USE_TLS = True
EMAIL_HOST = 'localhost'
EMAIL_HOST_USER = 'yourdomain-noreply@yourdomain.com'
EMAIL_HOST_PASSWORD = ''
EMAIL_PORT = 25
EMAIL_SUBJECT_PREFIX = '[Your Site] '

If your site uses Django but not Pinax, you're done.  To test it, restart Apache or touch your WSGI file, then open a "python manage.py shell" (from inside your virtualenv) and enter the following 2 lines at the prompt:
from django.core.mail import send_mail
send_mail('Subject here', 'Here is the message.', 'from@example.com', ['to@example.com'], fail_silently=False)
...and if you got an email in your "to@example.com" account, you're all set.  

Additional Pinax steps

If your site uses Pinax, that last step probably didn't send you an email.

Pinax includes an app called django-mailer that replaces Django's send_mail with its own queuing version.  Mail gets queued up (you can see it in your admin section under Home > Mailer > Messages) until you or a cron job runs the command "python manage.py send_mail".  

Try running "python manage.py send_mail" (still in your virtualenv).  You should get an email.
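If you'd rather inspect the queue from a Django shell instead of the admin, something like this should work. It's a rough sketch that assumes django-mailer is installed under its default app name, "mailer".

# Run inside "python manage.py shell".
from mailer.models import Message
from mailer.engine import send_all

print Message.objects.count()   # messages still sitting in the queue
send_all()                      # flush the queue, same as "manage.py send_mail"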

Make sure you have all the Pinax apps' email settings in your settings.py.  These depend on your particular desired configuration, but the ones that should be True for sure are ACCOUNT_REQUIRED_EMAIL and ACCOUNT_EMAIL_VERIFICATION.

ACCOUNT_OPEN_SIGNUP = True
ACCOUNT_REQUIRED_EMAIL = True
ACCOUNT_EMAIL_VERIFICATION = True
ACCOUNT_EMAIL_AUTHENTICATION = False
ACCOUNT_UNIQUE_EMAIL = EMAIL_CONFIRMATION_UNIQUE_EMAIL = False
EMAIL_CONFIRMATION_DAYS = 2
EMAIL_DEBUG = DEBUG

Now, the last thing you need is a cronjob to send the queued mail every minute, and to retry the deferred mail every 20 min.  How do you create this cronjob?

"crontab -e" opens up an editor.  Enter something like the following 2 lines.  "env" is my virtualenv, and "djangopackages" is my Pinax project directory.  I have a blank line at the end of mine, which you might need.
* * * * * (cd /home/dp/djangopackages; ../env/bin/python manage.py send_mail >> ../cron_mail.log 2>&1)
0,20,40 * * * * (cd /home/dp/djangopackages; ../env/bin/python manage.py retry_deferred >> ../cron_mail_deferred.log 2>&1)

If you haven't picked a default editor yet, it'll give you a choice.  I like nano for this kind of thing because it behaves like a normal text editor.  Save the file and exit.  The cronjob should be automatically installed.  

Now restart Apache and try signing up for an account on your Pinax site.  You should receive an account verification email within a minute or two.  Check your Spam and All Mail folders.  You're done.

If you didn't get one, it's probably an issue with the paths specified in your cronjob or access permissions.  Make sure your virtualenv's set up properly and that the path to its Python is correct.  And make sure ../cron_mail.log and ../cron_mail_deferred.log are writeable.  Try "touch cron_mail.log" from the appropriate directory.  

Other notes

You can set up your project to use Gmail's SMTP server instead, but Postfix is easier.  Gmail uses a different port for SMTP than the usual port 25 (587, I believe).  And I believe this only works for addresses like yourname@gmail.com; I don't think you can do it with Google Apps for Domains.

If your Pinax site is live, don't do "python manage.py send_mail" until you've made sure the message queue is clear except for your own email address.  You could accidentally send out emails to all your users, like I did a couple of hours ago.  Those delayed confirmation emails will all have broken links and confuse your users.

Posted via email from Audrey M. Roy


How to optimize large image files for the web, the command-line way
audreyroy
I use Inkscape to draw all my vector graphics nowadays.  Inkscape has a built-in "Export Bitmap" feature that lets you export PNGs, but it's pretty basic (I assume by design).

But what do you do once you've got that 400k Twitter background .png file exported from Inkscape?  

Or what if you want to do better than the image compression options in Adobe Photoshop or Illustrator, or even the GIMP?

Here are 2 decent options, assuming you're starting with a lossless PNG file:
  • optimize the PNG by throwing out unused data and reducing colors
  • convert it to a lossy JPEG (hopefully not too lossy, eek)
GIF is not really an option now that PNG files are well-supported, unless you're making lovely dancing animated GIFs.  

BMP?  TGA?  Other formats?  These aren't good for web graphics.  It's 2010, and web-suited bitmaps are JPG, PNG, GIF.  

Side note: I hope vector formats like SVG will be more supported by web applications (e.g. as Twitter avatars and background designs) in the near future, but it's a steep uphill battle.  I don't think it'll happen until cameras are capable of taking vector photos.

The starting image

Before trying either option, here's the image that I began with.  This is the PNG that Inkscape generated via Export Bitmap:

Option 1: Shrinking the PNG into a smaller PNG

I used OptiPNG to trim the useless data, shrinking my file from 389k to 318k.  This step doesn't lose any image data.  
$ sudo apt-get install optipng
$ optipng twitter_bg.png

In particular, OptiPNG trimmed the alpha data from my PNG, which didn't have any transparency.  These were just wasted bytes.

Then, I used pngnq to quantize the image, in other words to reduce the image's colors down to an optimized, smaller palette.  

(Photoshop and Illustrator let you choose the number of colors in the "Save for Web" tool.  Here we are doing the same thing manually.)

$ sudo apt-get install pngnq
$ pngnq -n 256 -s 1 -e b.png twitter_bg.png

The parameters I chose here are:
  • -n 256:  256 colors, which is the maximum palette size pngnq will produce.  Fewer colors than that would be too few to do my gradients justice.
  • -s 1:  Sample every pixel during its palette-picking algorithm.  The default is 3, but I found that s=1 improved certain parts of my image drastically.  The only drawback is a slightly longer wait (a couple seconds longer for my image).
This 256-color quantization can look pretty drastic if you've got a lot of gradients, as I did.  But it shrank my 318k PNG down to a tiny 99k.  Here's the resulting image:
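As a side note, if you'd rather drive this step from Python (say, in a build script), PIL can produce a similar adaptive 256-color palette.  This is just a sketch, and PIL's algorithm isn't the same as pngnq's, so expect somewhat different results:

import Image  # PIL; with newer Pillow installs use "from PIL import Image"

# Flatten to RGB, then quantize to an adaptive 256-color palette.
im = Image.open('twitter_bg.png').convert('RGB')
quantized = im.convert('P', palette=Image.ADAPTIVE, colors=256)
quantized.save('twitter_bg_quantized.png', optimize=True)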

Option 2: Converting the PNG to a JPEG, lossily

You can convert from PNG to JPEG using GIMP if you're more comfortable with a GUI than with the command line.  You have less control over the way the result looks, but it's often good enough for web viewing purposes.  

I don't know what's under the hood of the GIMP, and googling it for a few minutes didn't tell me.  Supposedly downscaling images in GIMP isn't as good as what you can get with the right parameters in ImageMagick, but that's just a rumor I haven't verified.  If I remember to look into it when I have some free time, I will.  Who knows, GIMP might even be using ImageMagick for downscaling.

Anyhow, you probably have ImageMagick installed already (unless you're on Windows, in which case you probably want to install it through Cygwin), so just go ahead:
$ convert -compress JPEG -quality 87 twitter_bg.png twitter_bg.jpg
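If you're scripting this in Python rather than shelling out to ImageMagick, PIL exposes the same quality knob.  Again, just a sketch with placeholder filenames:

import Image  # PIL

# Convert the PNG to a JPEG at quality 87 (mirrors the convert command above).
im = Image.open('twitter_bg.png').convert('RGB')
im.save('twitter_bg.jpg', 'JPEG', quality=87)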

I found that a quality setting of 87 was the lowest I could deal with.  But even that gave me a 187k image, and that was with enough blurriness to be mildly noticeable.  Note the artifacts in the rainbow bands.

I'd rather give up colors and have zero blurriness than keep all the colors and have mild blurriness.  Which is why I went with PNG color quantization in the end.

ImageMagick gives you a lot of other compression options.  I tried compressing to various degrees using JPEG2000 rather than JPEG compression, but I didn't notice enough improvement one way or another.  JPEG compression just looks bad when you're dealing with vector graphics or text.  You need those crisp edges.  Lossy JPEG is more of a photo type of compression.

Summary

There, I've written up as much as I could about the 2 options that were reasonable to me.  Pick option 1 if you're dealing with bitmaps exported from Inkscape or other vector graphics.  

But try option 2 if you're okay with some blurriness due to lossy compression artifacts.

Not all image compression is the same!  It's worth experimenting with various image compression tools and libraries.  

And if you try them all and still hope for better compression or quality, do some research on image compression algorithms and a little coding :)

Posted via email from Audrey M. Roy


How to make perfect sweet potato fries
audreyroy
Last month my friend Ruza and I tested various combinations of sweet
potato fry cooking methods. Here are the results of our experiments.
I was hoping to post this as an instructable, but I forgot to
photograph the fries that won the experiment (they got eaten too
fast).

The best result: very crisp outsides on about 90% of the fries, with
fully cooked middles and only about 10% burnt. I wish I had a final
photo of these. You won't find sweet potato fries this good in any
restaurant.

1. Slice a sweet potato into fries of variable length x 5/8" width x
3/16" depth.
2. Soak uncooked fries in water for 30-45 min.
3. Bring salt water to a boil, then add fries and boil them for 7-10
min or until tender enough to eat.
4. Drain cooked fries and transfer to a pan containing 3 tbsp hot
olive oil. Cook on low heat roughly 10 min or until they transform
from floppy to crisp, flipping as needed.

Very delicious and interesting low-fat, extra-sweet (due to
caramelization in the oven, not added sugar), chewy fries:

1. Same as above.
2. Same as above.
3. Coat baking sheet with olive oil spray. Lay uncooked fries in a
single layer on baking sheet. Coat top surface of fries with another
light layer of olive oil spray. Sprinkle with salt and pepper.
4. Bake at 250 F for 2 hours.

Far less tasty variations:

Pan-frying the fries without boiling them first results in crisp but
40% burnt fries. The fries' insides take longer to cook through,
leaving the outsides to burn in the meantime.

Pan-frying over medium-high heat rather than low heat results in 66%
burnt, 34% undercooked fries. Burnt fries still taste great, but they
could be far greater.

Soaking uncooked fries overnight, then boiling and frying them removes
most of their sweetness. Unsweet sweet potato fries are disgustingly
inedible. They look good, but it's false.

Baking fries at 375-400 F for 40 min without any pre-soaking results
in well-cooked but non-crispy fries. They become floppy and fall
apart from their mushy floppiness. They taste great, especially when
eaten with a fork like typical oven-roasted vegetables. In contrast,
regular potato fries cooked in this way become crispy.

Other notes:

I don't know if the 30-45 min of soaking makes a noticeable difference.

Smaller sweet potatoes are easier to handle while slicing.

Posted via email from Audrey M. Roy


Serving static websites with Amazon S3+CloudFront, GoDaddy, Nginx, and a VPS
audreyroy
S3 is good for serving static files in situations where you may need
to deal with unexpected spikes in traffic. You can use any S3
uploader such as S3Fox or S3 Organizer to create a bucket for your
files and then upload them. This requires no programming.
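(If you do want to script the upload, though, the boto library makes it a
few lines of Python. This is just a sketch; the credentials, bucket, and
file names are placeholders.)

from boto.s3.connection import S3Connection

# Connect with your AWS credentials (placeholders here).
conn = S3Connection('YOUR_ACCESS_KEY', 'YOUR_SECRET_KEY')

# Create the bucket, or fetch it if you already own it.
bucket = conn.create_bucket('your-bucket-name')

# Upload a file and make it publicly readable over the web.
key = bucket.new_key('css/style.css')
key.set_contents_from_filename('css/style.css')
key.set_acl('public-read')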

With a couple more clicks, you can turn an S3 bucket into a CloudFront
bucket. Then, the bucket can be used as a simple content delivery
network. I pointed http://cdn.fuzzyrainbow.com to my CloudFront
bucket by editing the CNAME settings in my GoDaddy control panel for
fuzzyrainbow.com.

You'd be able to use the S3/CloudFront bucket as-is as a static web
server if it weren't for Amazon's inability to serve default index
pages. I fiddled with S3, CloudFront, and GoDaddy for a bit, but I
couldn't get http://www.fuzzyrainbow.com to automatically serve
index.html.

My quick solution to this was to install Nginx on my VPS to serve the
default index page and the .js files. I put those files into a
directory on my VPS and edited my nginx.conf to serve those files.
The images and stylesheets linked from index.html are retrieved from
http://cdn.fuzzyrainbow.com, my fancy little CDN.

I've put up the files for http://www.fuzzyrainbow.com here:
http://github.com/audrey/fuzzyrainbowcom

Posted via email from Audrey M. Roy


Pulling personal data out of OpenSocial containers and into a standalone website
audreyroy
I've been struggling with the OpenSocial docs and various samples,
trying to find a way to pull my personal data out of my Orkut profile.

First I tried out the OpenSocial Python Client library samples and the
Google Friend Connect Chow Down sample. I didn't fully understand
what was going on, but I saw that I'd need a consumer key & secret.
(I just learned about GFC yesterday and am still trying to figure out
what it can and can't do.)

I created my own gadget.xml, uploaded it to a server, and added it to
my Orkut sandbox profile page. I verified my ownership of gadget.xml
with Google's "Gadget Ownership Verification" tool, at
https://www.google.com/gadgets/directory/verify. That gave me my
Orkut gadget consumer key and secret.

Then, I discovered some interesting info here:
http://sites.google.com/site/oauthgoog/2leggedoauth/2opensocialrestapi
1. Orkut only supports 2-legged OAuth.
2. A 3rd party site containing no gadget needs to use 3-legged OAuth
to retrieve a user's Orkut profile data.

What is 3-legged OAuth? For example: your website has a "Login with
Twitter" link that sends you to Twitter for approval, upon which
Twitter sends you back to your website with an access token.

In contrast, a 2-legged OAuth example: your Orkut (or Hi5, Ning,
MySpace, whatever) gadget requests data from your own personal API
server, for use in the gadget itself. In this case, your gadget uses
a shared secret from the OpenSocial container to sign its requests.
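For what it's worth, the signing side of a 2-legged request is only a few
lines with the python-oauth2 library. This is just a sketch: the consumer
key/secret are placeholders, and I haven't verified this exact endpoint
URL against Orkut's OpenSocial REST API.

import oauth2 as oauth

# Sign requests with the gadget's consumer key/secret only -- no per-user
# access token, which is what makes it "2-legged".
consumer = oauth.Consumer(key='YOUR_CONSUMER_KEY', secret='YOUR_CONSUMER_SECRET')
client = oauth.Client(consumer)

# Ask the container's OpenSocial REST endpoint for a profile, identifying
# the user with xoauth_requestor_id (URL below is illustrative only).
url = ('http://www.orkut.com/social/rest/people/@me/@self'
       '?xoauth_requestor_id=SOME_USER_ID')
resp, content = client.request(url, method='GET')
print resp.status
print content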

I guess I have 3 options now:
1. Give in and have everything live inside of an Orkut gadget
2. Create an Orkut gadget that pushes my profile data to my server and
then sends me to my website
3. Switch to another OpenSocial container that supports 3-legged OAuth
(if any exist) or to another social media site that has it (Twitter?
maybe Facebook Connect?)

To be continued...

Posted via email from Audrey M. Roy


Price It By Phone, and the Twilio API
audreyroy

I won the Twilio+AppEngine contest with Price It By Phone, an app that lets you look up Amazon.com book prices by touch-tone phone.  Right now it's up and running at http://price-it.appspot.com.

If you try it and run into problems, please let me know!

My interview with Twilio is here

I like the Twilio API a lot.  It's the easiest-to-use API imaginable.  You set up your Twilio phone number with a URL to post to.  Then, when you call the phone number, Twilio sends a POST request to that URL with the caller's phone number and the digits entered as parameters. 
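To give a feel for the shape of such a handler, here's a bare-bones sketch for the App Engine webapp framework.  The handler and route names are made up for illustration; this isn't Price It By Phone's actual code.

from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app

class VoiceHandler(webapp.RequestHandler):
    def post(self):
        caller = self.request.get('From')    # caller's phone number
        digits = self.request.get('Digits')  # touch-tone digits entered, if any
        # Reply with TwiML telling Twilio what to say back to the caller.
        self.response.headers['Content-Type'] = 'text/xml'
        self.response.out.write(
            '<?xml version="1.0" encoding="UTF-8"?>'
            '<Response><Say>Caller %s entered %s</Say></Response>'
            % (caller, digits or 'nothing'))

application = webapp.WSGIApplication([('/voice', VoiceHandler)])

def main():
    run_wsgi_app(application)

if __name__ == '__main__':
    main()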

They have a Twilio-AppEngine sample among their demo apps.  This is awesome.  As you can see, I'm back to really liking GAE again, a lot.

I'm putting a bit more time into Price-It and hoping to launch v2 soon.  Features that I want to put into it for sure:  support for ISBNs with Xs and any other chars that appear in them, user accounts, verifying your phone # before you're allowed to see your book lookup history, being able to delete your history.  Possible features to be added: integration with Amazon wish list, FB connect. 

Also, I want to make it look web2.0 shiny, with cute illustrations and bright, designed gradients.  Yes, I know that the dark, trippy background doesn't have mass appeal.  Sometimes I make art for the sake of making myself happy :)  View the page's source code to see how it's done.

If you have other feature ideas, I'd be interested in hearing them. 

Posted via email from Audrey M. Roy


Jeremiah Teipen & Benjamin S. Jones at Satori Gallery, New York
audreyroy

Earlier this week I had the chance to visit some of the galleries in Manhattan's East Village/Lower East Side.  

My favorite by far was Gallery Satori.  They have a main space and a project space (i.e. a side mini-gallery).  The main space is currently filled with large mixed-media sculptures by Benjamin S. Jones.  I was instantly drawn to these pieces, which look like exploding, radiating architectural models.  One piece has graphic foam arrows flying out of it, and another is like a sea urchin of high-rise and smaller buildings.

In their project space is a series of "found-media" videos by Jeremiah Teipen.  The videos are extravagant collages of bits and pieces of video and animation from the web.  Teipen's biography describes his pieces as "pure visual gluttony," a description that I thought was vividly perfect.  He takes the most gluttonous parts of the web (e.g. MySpace comment "bling" graphics) and scrolls them across his video pieces.  It is a bit hard to describe.  The videos remind me of Jeff Koons' work.  You really should see them while they're up at Satori if you can.

I like how Gallery Satori shows artwork that teeters on the edge of being uncomfortably experimental.  In contrast, most of the other Lower East Side galleries were either too conservative or off the deep end of experimental.  I was also very impressed with how the artwork was presented, in a way that I don't quite know how to explain.  It just felt right.

Posted via email from Audrey M. Roy

