UPDATE: The EuchreWP post has been relocated to a new blog dedicated solely to EuchreWP content.
leobartnik.net
General spoutage and some technical stuff too
Monday, February 06, 2012
Thursday, January 26, 2012
Monetizing your mobile application using banner ads
If you are planning on monetizing your mobile application by including banner ads, there are a couple of things to keep in mind:
1) Stating the obvious here, but you need a *lot* of ad impressions (i.e., views/eyeballs) to generate any income whatsoever, which of course means you need a lot of downloads and users who open your app pretty regularly. (As a rough illustration: banner revenue is quoted as eCPM, earnings per thousand impressions, so at an eCPM of, say, a dollar, it takes 100,000 impressions to make $100.) If you have a game that goes viral, maybe that will work out for you, but if you have a fairly niche application based on a 100+ year-old card game, published on only one platform like mine (EuchreWP), there is a natural limit to how many new and dedicated users you can reasonably expect.
2) There is a fundamental problem with how the ad content itself is delivered, at least in the toolkit I am using. The ads take a non-zero amount of time, sometimes a second or two, to download from the ad servers, and during that time the application becomes unresponsive (for the programmers: the ad control blocks the UI thread). This can lead to behavior that is difficult to reproduce, especially during development, because real ads are not served up when you run the app in a development environment. Testing on a handset is only part of the equation; if you tweak the ad campaign parameters after publishing, like I did, different behaviors can emerge.
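One partial mitigation I've toyed with (and I stress this is only a sketch, not the fix) is to keep the ad control out of the page until after the first render, so the initial navigation at least isn't held hostage to the ad download. The names `AdPlaceholder` and `CreateBannerControl()` below are illustrative stand-ins for whatever container and ad control your SDK actually provides:

```csharp
using System.Windows;
using System.Windows.Controls;
using Microsoft.Phone.Controls;

public partial class MainPage : PhoneApplicationPage
{
    public MainPage()
    {
        InitializeComponent();
        // Don't declare the banner in XAML; wait until the page has rendered once.
        Loaded += OnPageLoaded;
    }

    private void OnPageLoaded(object sender, RoutedEventArgs e)
    {
        Loaded -= OnPageLoaded;

        // Queue the ad-control creation behind the work already on the UI thread,
        // so the first frame the user sees isn't delayed by the ad fetch.
        Dispatcher.BeginInvoke(() =>
        {
            UIElement banner = CreateBannerControl();  // hypothetical helper wrapping the SDK's banner control
            AdPlaceholder.Children.Add(banner);        // AdPlaceholder: an empty Grid or StackPanel in the XAML
        });
    }

    private UIElement CreateBannerControl()
    {
        // Placeholder: construct and configure your ad SDK's banner control here.
        return new ContentControl();
    }
}
```

Even deferred like that, the periodic ad refreshes can still hiccup the UI mid-game, which brings me to my conclusion.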
I've come to the conclusion that the almost-nonexistent revenue stream is not worth the hassle caused by the blocking and lag introduced by the advertising component. Already I have seen some oddball behavior that I think is related to that blocking, and the last thing I want is for a bad user experience to tarnish someone's impression of the app.
Tuesday, June 07, 2011
Cool car week
One really great thing about living in our tiny little city is that every year there is a huge car show on the second Saturday of June, which means that the whole week leading up to the show you start seeing all kinds of cool, rare, unusual cars around town. Just now we saw a Pontiac G8 GXP, a BMW-looking (and performing) little rocket. The GXP is the extra-special version with a 415 hp Corvette engine and a 6-speed manual tranny, along with Brembo brakes and a tighter suspension. It's an awesome car all around; don't make the mistake of getting into a stoplight drag race with it, because it'll get to 60 in 4.5 seconds, which is not bad for a ~$40,000 car.
The car was brought to the U.S. only in 2008 and 2009 (and maybe 2010, according to this article at topspeed.com), imported from Australia as a re-badged Holden Commodore. The run-of-the-mill ones are going for about $18,000 used right now, and I imagine that if you can even find a GXP for sale it would go for significantly more.
Sunday, May 22, 2011
Distributed Applications
I read an article by Cory Doctorow in volume 26 of Make Magazine about two interrelated topics: distributed denial of service (DDoS) attacks, and the difficulty of finding a willing hosting provider for your site once it comes under scrutiny and legal pressure from government agencies seeking information and data by force. In either case, your site, or one upon which you rely, can be slowed or stopped, whether by a person or group applying brute-force technology (governmental or otherwise) or by the equally overwhelming force of law. The result either way is a chilling effect on the willingness of hosting providers to host your site once it becomes more trouble than it is worth.
It got me thinking about what might be done to prevent or frustrate such attacks; is there a way to distribute not only the network itself, but the applications that run on it as well? Server farms and load balancers don't go far enough because they still concentrate application resources at one provider, even if they have multiple locations, backups, alternate internet backbones, and all the rest of the safeguards that go toward giving them the ability to guarantee uptime.
What comes to mind is a protocol like BitTorrent where resources are distributed not in a client-server way but in a peer-to-peer topology, providing redundancy and distribution of data in a way that would be much more difficult to stop or interrogate than the traditional internet server model. Imagine a DDoS attempt on a BitTorrent "hosted" application: there is no single choke point to attack. What about a subpoena requesting hosting information when the peer hosts are so varied in number and location?
Of course there is a major flaw in this simplistic approach: how do you trust the peers who are hosting part of your application not to poison the torrent, examine incoming and outgoing traffic, and so on? Also, applications today are interactive and real-time, not "download once and run," and even if you did keep the application logic solely on the client once it is obtained, in most cases the application will still need to communicate with network resources, and possibly persist data in the cloud, to be useful at all.
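For the static-content half of that problem, BitTorrent already has an answer worth noting: every piece is checked against a SHA-1 hash published in the (trusted) torrent metadata, so a poisoned piece from a hostile peer gets discarded and re-requested. A minimal sketch of that kind of check in C# (names are illustrative):

```csharp
using System.Linq;
using System.Security.Cryptography;

// Verify a piece received from an untrusted peer against the hash the
// (trusted) metadata says it should have, the same per-piece integrity
// check BitTorrent itself performs.
static class PieceVerifier
{
    public static bool IsPieceValid(byte[] pieceData, byte[] expectedSha1)
    {
        using (var sha1 = SHA1.Create())
        {
            byte[] actual = sha1.ComputeHash(pieceData);
            return actual.SequenceEqual(expectedSha1);
        }
    }
}
```

That protects the bits themselves, but it says nothing about peers who can watch your traffic or who would have to execute your application logic, which is where the harder trust problem lives.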
But, in terms just of redundancy, distribution, and the lack of a central hosting provider, that kind of model seems like a step in the right direction.
Saturday, February 05, 2011
Unresponsive Mac Mini at login or blue screen
I just resolved a very long-standing problem with our Mac Mini that has been dogging me ever since upgrading to Snow Leopard. The problem was that the machine would stick on the blue screen after logging off one user and before displaying the login screen. Then, even after the login screen appeared, mouse clicks were unresponsive, although the mouse pointer still moved in response to user input.
I tried a lot of fixes, some expensive:
1) Installed new RAM, taking it from 1 GB to 4 GB
2) Installed a larger hard disk drive, taking it from 120 GB to 320 GB (the 120 GB drive is now in a Linux machine and reports as perfectly healthy, as it did under OSX)
3) Added the entire HDD to the "privacy" list in Spotlight to prevent indexing of the entire disk
4) Disabled indexing from the command line (and did not turn it back on)
5) Ran Disk Utility | Repair Disk Permissions
All of those are things that various posters in various forums suggested as fixes for what sounded like the same or very similar problems, but none of them worked. Finally I had the idea to search on terms related to the login screen and fast user switching rather than things like "slow," "unresponsive," and the like. Something caught my eye relating to fast user switching and a difference in the way the Accounts utility presents the fast user switching options between newer versions of OSX and older ones. Most interesting is the fact that we were not using fast user switching; if I was logged in and my wife wanted to use the computer, I had to log out and then she could log in. I think at one point I may have intentionally disabled fast user switching, thinking it was a way to conserve system resources and therefore improve performance.
It turns out that with Snow Leopard you don't so much get to choose *whether* to use fast user switching or single-user-only mode; you just get to choose *how* OSX goes about it, via the "Show fast user switching menu as:" dropdown list, which has a checkbox next to it that controls whether fast user switching is actually enabled. On our machine that checkbox was unchecked, so I checked it and left the default option of "Name" selected in the list. The setting lives in the Accounts screen of System Preferences.
The difference is like night and day: no more lag, no more wondering if the screen will ever come back; it just works. So far I have tested by locking my screen, logging in as a different user, locking that screen, switching back to my screen, and other tests like that. I have also logged one user completely off and logged back on as an already-logged-on user, plus a couple of other combinations that fit the pattern of our normal, regular usage of the machine, and they all still work. What I haven't tried yet (because I wanted to post this blog entry and not risk being unable to log back in immediately) is powering the computer all the way down and turning it back on, which also exhibited the same frustrating behavior.
At the moment I'm not feeling brave enough to test the negative by unchecking the box to see if the behavior returns, but that's my hypothesis. Looking at things from a black box perspective, it would seem that there is some kind of hangup within OSX where it lags like crazy when switching users if you are in Snow Leopard and if you have unchecked the "Show fast user switching menu as:" checkbox. Rather than letting the user check the checkbox that causes all the trouble and does not otherwise seem to control any option, perhaps that checkbox should just go away.
If you are seeing the same horribly frustrating behavior we were and have tried everything else to no avail, take a look at your Accounts settings and see if the fast user switching checkbox is unchecked. It's about as simple a solution as you can get to an incredibly frustrating problem; it's just not at all obvious, or even intuitive, how to go about finding it.
Wednesday, November 17, 2010
Fun at the dentist
Against my better judgment I told my dentist today that I have a tooth that hurts if I chomp down on something crunchy like hard candy; her reaction to that is to have me bite down on a hard, plastic dental tool thing, which of course hurts like a mother. Is that to see if I am lying?
Just to make doubly sure she targeted the problem tooth a second time, sort of like on those personality profile tests where you get the same question multiple times to see if you contradict yourself. Yep, still hurts when I do that!
Thursday, November 11, 2010
Arduino vs. Netduino, or How Netduino totally misses the point
I am a developer both professionally and as a hobbyist; I can't get enough of the stuff. One of the things I really like to do after a hard day of slinging code that, if I'm honest, is just slightly less than sexy, is cobble together some real-life hardware components, and then write the code to make them do cool things like read light values, power motors, and make noise with synthesizers. It's a nice change to do hands-on work and I find it enjoyable and relaxing. In other words, I write business apps by day, so I like to play hardware at night.
The hardware I am speaking of is Arduino, an open source hardware and software platform that has tons of libraries, users, and documentation. The development environment is intentionally limiting, and when compared to something like Visual Studio .NET where I do my day-to-day work, it is refreshing in its simplicity. It is actually not very good at all when compared with VS.NET, but it's not meant to be anywhere near the same thing. In its favor is the fact that I am able to use it from any computer in the house, even though that means multiple versions of Windows, a couple of Linux machines, and even a Mac mini.
But, many times over the past couple of years that I have been tinkering with Arduino I found myself wishing that I could write code in the language that dominates most of the work I do: C#. Arduino uses C++, which is at least a familiar syntax, but when you've been used to working with C# and the .NET framework, it's a bit of a pain to go back to the bad old days of C/C++.
So, a little while back I started hearing about Netduino, which is like Arduino in terms of being an open source hardware/software platform, but which uses C# and .NET as its development environment and framework instead of C/C++. Very recently I decided to check it out, thinking that this must be a marriage made in heaven: my favorite development environment combined with really cool hardware hacking; what could be better?
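And to be fair, the development side really does deliver on that. The canonical Netduino "hello world" looks roughly like this under the .NET Micro Framework (the namespaces below are as I remember them from the Netduino getting-started material, so treat them as illustrative rather than gospel):

```csharp
using System.Threading;
using Microsoft.SPOT.Hardware;
using SecretLabs.NETMF.Hardware.Netduino;

public class Program
{
    public static void Main()
    {
        // The onboard LED, driven as a plain digital output.
        OutputPort led = new OutputPort(Pins.ONBOARD_LED, false);

        // Blink forever: familiar C#, no setup()/loop() split like Arduino.
        while (true)
        {
            led.Write(true);
            Thread.Sleep(250);
            led.Write(false);
            Thread.Sleep(250);
        }
    }
}
```

It builds and deploys straight from Visual Studio, which is exactly the part I was excited about.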
Well, here's the rub: Netduino totally misses the point of Arduino. The hardware is open source, sure, but who in their right mind is going to brave soldering surface-mount 100-pin chips (the Netduino processors)? Who really wants an ARM chip for a microcontroller, even if it would be cool to have built-in USB, LAN, and all the other stuff that the Netduino chips have? The whole point, to me, is simplicity, and the fact that even with my earthquake-shaky hands I can still manage to solder a simple 28-pin chip to a prototype board. Sure, the thing has hardly any memory at all, and no built-in USB or LAN, but that's what gives it its charm: I can make really fun little learning projects with blinky LEDs and inputs that read analog potentiometers, and I don't expect it to have more technology in it than a damn cell phone.
I downloaded, but have not yet installed, the Netduino development kit, so I still have hope that I can ditch the fancy ARM chip and use the trusty old Atmega168, but I am expecting to be disappointed.
Here's hoping that I am wrong.