soulshaver wrote:
It's debatable whether it fits the technical definition of open-source, but I'd hate to be arguing about simple semantics.
Ok. I think you're not getting something. I've been working with unix systems since before Linux was invented. I don't need to research this. I already know that unix kernels were not open source. Period. I also know that most of the distributions were not open source either. It's not like you'd run into a bug with the HP-UX automounter, log onto their site and just download the source code to see if you could identify the problem.
Virtually no vendors work that way. Not the ones who want to actually make money. And guess what? Even most Linux distributions don't work that way either. They're a bit closer, but not much. Take it from someone who's actually been involved in testing and debugging kernels and utility software for unix and unix-like systems for nearly 20 years: the idea that Unix is more secure than Windows because it's "open source" is laughable. Linux inherits much of its security from the superior design of unix, but there's nothing about open source itself that makes it more secure in any way.
I don't know how many different ways I can explain that. Allowing anyone to see the source code does make it easier to find and fix bugs (but has some negative effect on development direction if you're not really careful). It allows you to build a competitive product cheaply (because a bunch of people are giving their time to the product for "free"). But it has absolutely no positive effect on the security of the resulting product. If anything, it makes the product less secure, because anyone who might want to hack it can study the code to figure out how.
It's always easier to find holes in software than it is to fix them, if for no other reason than that you can't fix a hole until you find it. Fixing already requires finding, which makes fixing harder (or at least more time consuming) than finding. When it comes to finding and fixing bugs, the open source methodology works quite well. When it comes to finding and fixing security flaws? Not so much...
Quote:
That wasn't the point. The point was that if we can actually see the code that is used and study the design of the machine, then we can verify that it works correctly. That's called transparency. I don't care if it's technically open source software or not.
Well. Transparency can mean a lot of things. Usually, it's about transparency in the "process", not necessarily the nuts and bolts. I want to know how my data is secured by a vendor, but I don't really expect him to tell me exactly which encryption system he's using much less what hash keys he's using.
I simply don't see the real value in demanding this. It's one of those claims that sounds great when repeated by people who don't really understand security and proprietary systems, but it doesn't really buy you anything. Look at it another way: You drive a car, right? Do you have open access to every single bit of design material involved in the construction of that car? Or do you just have an operating manual?
Do you feel your car is less safe because you don't have access to the details of its design? The point I'm getting at here is that over time any product becomes better because of feedback from the market. In the case of electronic voting machines, the same thing occurs. Companies that make good products get their products purchased by the states who want/need them. Those who make crappy ones get dropped for a competitor. The people can apply pressure to the purchasers (the state governments, typically) based on those choices.
The only real flaw with using this same mechanism is that there are many people who for some political reason or another don't want electronic voting systems to be put in place. They then convince other people that electronic voting systems aren't safe and aren't secure and insist that ludicrous requirements be placed on their use. The net result isn't to create a better election system, but a worse one. We can speculate as to what the motivations are behind that, but it is the end result.
And no amount of opening of source code prevents it. If history is any indicator, it'll just open the issue up to more argument, and further confuse and delay the adoption of good quality, "secure" electronic voting systems in this country. Again, we can speculate as to why anyone would want that, but it is what ends up happening.
Quote:
I do think that is an issue worth debating, however. If they wanted to retain the proprietary copyright and not have their code stolen by competitors, we could form some sort of congressional panel of several bipartisan experts who could sign confidentiality agreements and examine the machines and the source code behind closed doors to verify its integrity.
Or we can focus on the end result. Just as we measure a car's safety based on how it performs in the real world, we should do the same with electronic voting systems. We don't need to know what's going on under the hood. We just need to demand specific security features that work. So a good paper trail is obviously important. More reliable input systems are important. We can push for those improvements (and should) without demanding that the vendors open up their source code to us.
The code-vault type idea you propose doesn't buy us anything either. At the end of the day, the vast majority of people simply have to trust that whatever experts are examining and designing the code that runs these things have done their job properly. And honestly, the last people I'd trust to do that would be a political organization. Let each company design its own systems and then compete with them in an open market. That's how you get the best results.
Opening the code would only result in "dueling experts", with manipulation of public perception being the goal. If I can convince enough people that my competitor's system is crap, maybe they'll buy my system instead, right? It's a really, really bad approach, since negative feedback is all that's going to be effective, and that tends to result in adoption of worse technology over time, not better.
Quote:
The point is that currently over 1/3 of the electoral college vote comes from electronic voting machines that do not produce a paper trail, so there is no way you can verify the vote. In light of recent evidence of voter fraud with these machines I think that would be alarming to some people.
Yup. But you're ****-blocking your own issue, aren't you? If you believe that all electronic voting systems should have paper trails (and just about everyone agrees with that), then just push for that one thing. By lumping in a bunch of other arguments and claims and demands, you increase the likelihood that nothing at all will get done, and absolutely decrease the likelihood that a good change will occur, much less the specific one you care the most about.
That's the problem with the blog you linked to originally. It wasn't about proposing solutions, but simply blasting electronic voting devices for being insecure or inaccurate. Its goal is simply to spread FUD (Fear, Uncertainty, and Doubt) about e-voting. Period. By linking to it, you're supporting that approach. My whole point is that this is the wrong way to do it. If you want to make elections more secure, then instead of reciting a laundry list of things you think are wrong or may be wrong, focus on just the areas you think need improvement and the specific improvements that need to be made.
So you want paper trails on all voting machines? I agree 100% with you. And if you'd restricted your original post to just that, you'd likely have gotten nothing but a chorus of "I agrees" from everyone.