PaulSnow t1_itqqh5t wrote

Why don't we require all electronic voting to be done with open source hardware and software, for true end-to-end auditability and transparency?

43

TheOfficialACM OP t1_itqusk8 wrote

The current business model of elections is that vendors are not required to open source their systems, but they are required to submit them for certification and testing. The certification process requires the vendors to share their source code with the testing labs.

For what it's worth, there have been a number of attempts at building an open source voting system that could be commercially viable in the U.S. market, but none of them have achieved significant market share to date, except perhaps the Los Angeles VSAP system, whose source code isn't actually open yet (the article is from 2018, but I don't think anything has changed since then).

(I do consulting with another open source vendor, VotingWorks.)

30

PaulSnow t1_itrgk17 wrote

Hence the need to require open source. It isn't a question of commercial viability if not providing an open source product is precisely what makes a system commercially unviable.

9

TheOfficialACM OP t1_itrlcw4 wrote

Here's a more concise way to put it: I would prefer that we not have trade secrets in elections. Let the vendors copyright and/or patent their stuff, but the source code should be open to public inspection. This isn't about security, per se, as much as it's about transparency. If you want to get nerdier about it, it's also about publicly verifiable reproducible builds, which have ramifications for both security and transparency.
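To make "publicly verifiable reproducible builds" concrete, here is a minimal sketch in Python (the file names are hypothetical placeholders): any observer rebuilds the system from the published source and checks that its digest matches the certified artifact.

```python
# Minimal sketch of reproducible-build verification: rebuild from the published
# source, then compare digests against the certified artifact byte for byte.
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical file names: the image certified for use vs. one rebuilt locally.
certified = sha256_of("ballot-scanner-2.1.0-certified.img")
rebuilt = sha256_of("ballot-scanner-2.1.0-rebuilt.img")

print("MATCH" if certified == rebuilt else "MISMATCH: build is not reproducible")
```

If the digests match, the public knows the certified binary really came from the published source; if they don't, something in the toolchain or the artifact needs explaining.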

15

PaulSnow t1_itrmgg9 wrote

In this case, transparency is security (more eyes doing review), and verifiable reproducible builds are a given.

In addition, open source hardware is a critical component here.

5

e_to_the_pi_i_plus_1 t1_itqseft wrote

Part of the issue is that elections are managed by municipalities. States have different rules, and within states many counties have their own rules as well.

For a long time, hiding source code was thought to improve security. Voting machines are expected to last 10-20 years, so it takes time to move toward more modern notions of what makes something "secure."

19

dratsablive t1_itr5jh4 wrote

Not just time, but the motivation and funds to change.

6

PaulSnow t1_itrgzm2 wrote

Open source hardware and software are the only way to rid ourselves of the kind of accusations about voting machines that we saw in 2020.

And it isn't an entirely baseless fear: we know software is often compromised, and we even know hardware is often compromised.

The most secure software in the world is open source, and the best way to move forward with secure, feature-rich voting software is to ensure everyone can develop on a common base.

5

borktron t1_itslz64 wrote

While I'm generally in favor of well-understood and battle-tested open-source hw/sw, it's not really a panacea. How do you know the build of the open-source software hasn't been tampered with? How do you know that the physical machines actually in use conform to the open-source specs and haven't been tampered with?

Of course, you can mitigate some of those risks by allowing stakeholders to inspect, verify hashes of builds, etc. But that's a lot of human-factor stuff on top that you're absolutely relying on.

So even in an open-source hw/sw world, RLAs are still critical.
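For readers unfamiliar with RLAs (risk-limiting audits), here is a simplified ballot-polling test in the spirit of BRAVO; the reported share and risk limit below are hypothetical inputs, and a real audit also handles multiple contests, invalid ballots, and escalation rules.

```python
# Simplified ballot-polling risk-limiting audit (BRAVO-style, two candidates).
# Paper ballots are drawn at random; the audit stops once the risk that a wrong
# reported outcome would survive the audit is at most the chosen risk limit.
import random

def ballot_polling_audit(reported_winner_share, true_winner_share,
                         risk_limit=0.05, max_samples=100_000):
    """Return True if the sampled paper ballots confirm the reported outcome."""
    t = 1.0  # sequential likelihood-ratio test statistic
    for _ in range(max_samples):
        # Draw one random paper ballot; it shows the reported winner with
        # probability true_winner_share (what the paper trail actually says).
        if random.random() < true_winner_share:
            t *= reported_winner_share / 0.5
        else:
            t *= (1.0 - reported_winner_share) / 0.5
        if t >= 1.0 / risk_limit:
            return True  # outcome confirmed at the stated risk limit
    return False  # not confirmed: escalate, possibly to a full hand count

# Example: a contest reported at 54% for the winner, audited at a 5% risk limit,
# where the paper ballots really do show about 54%.
print(ballot_polling_audit(reported_winner_share=0.54, true_winner_share=0.54))
```

The number of ballots you must inspect depends mostly on the margin and the risk limit rather than on the total number of ballots cast, which is what makes RLAs practical at scale.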

2

TheUnweeber t1_itsw9xz wrote

Although open source isn't a panacea, couple it with trustless ledgers, and the more the parties distrust each other, the more nodes they (and nonpartisans) will run... and that's nearly a panacea.
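A toy sketch of why more independently run nodes help (the data is hypothetical, and a real system also needs consensus and authenticated submissions): every party re-verifies the same hash-chained ledger, so tampering by any single operator is caught by everyone else.

```python
# Toy append-only ledger: each entry commits to the previous one by hash, so any
# independently run node can detect tampering by re-verifying the whole chain.
import hashlib
import json

def entry_hash(entry: dict) -> str:
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append(ledger: list, payload: dict) -> None:
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    entry = {"prev": prev, "payload": payload}
    entry["hash"] = entry_hash({"prev": prev, "payload": payload})
    ledger.append(entry)

def verify(ledger: list) -> bool:
    prev = "0" * 64
    for e in ledger:
        expected = entry_hash({"prev": e["prev"], "payload": e["payload"]})
        if e["prev"] != prev or e["hash"] != expected:
            return False
        prev = e["hash"]
    return True

ledger = []
append(ledger, {"precinct": "12-A", "batch": 1, "ballot_count": 412})  # hypothetical
append(ledger, {"precinct": "12-A", "batch": 2, "ballot_count": 389})
print(verify(ledger))                        # True
ledger[0]["payload"]["ballot_count"] = 999   # any node re-checking the chain sees this
print(verify(ledger))                        # False
```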

2

PaulSnow t1_itudneq wrote

This is exactly the point. Fewer truly independent code bases, wider distribution of knowledge of the code, more tools deployed for automated verification and validation of the code, and so on.

Proprietary code usually devolves to the point where most of it is treated like a black box, because knowledge of the internals is restricted. Then, over time, the institutional knowledge is lost as people leave the effort (nobody is immortal).

At least with open source, knowledge can be distributed over a larger body of people, and more experts can exist for the entire ecosystem to leverage. For applications where no "secret code" or "secret sauce" is required, and where secrecy is in fact nothing but a danger, open source is the solution.

2

PaulSnow t1_ittge4u wrote

I am a big fan of RLAs. The 2020 election was largely run in a way that allowed very few statistical tests to be applied to compute a confidence level on the ballots.

However, software builds can be hashed and signed, and open source hardware can refuse to load unsigned builds. But how do you evaluate the signature? This is where small cryptographic proofs anchored in a blockchain can provide a distributed ledger.
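As an illustration, here is a minimal sketch of the sign-then-verify step in Python using the `cryptography` package (key management, the boot chain, and the ledger lookup are all simplified assumptions): the certified build is signed offline, and the machine refuses to load any image whose signature fails to verify against the published public key.

```python
# Sketch of "refuse to load unsigned builds": the firmware image is signed
# offline, and the boot path verifies that signature before handing over control.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# --- done once, offline, by the build/certification authority ---
signing_key = Ed25519PrivateKey.generate()
public_key = signing_key.public_key()          # published, e.g. anchored in a ledger
firmware_image = b"...certified reproducible build bytes..."  # placeholder contents
signature = signing_key.sign(firmware_image)

# --- done by the machine's boot loader before loading any image ---
def verify_before_load(image: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, image)  # raises InvalidSignature on any tampering
        return True                    # only now is the image allowed to run
    except InvalidSignature:
        return False                   # refuse to boot, log, and raise an alarm

print(verify_before_load(firmware_image, signature))              # True
print(verify_before_load(firmware_image + b"tamper", signature))  # False
```

Publishing the public key's fingerprint widely (on paper, on a ledger, in certification records) is what lets anyone check that the machine will only run the certified build.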

The hardware and the software can be reviewed by everyone earning money in the voting game, and when disputes arise there is no basis for demanding special access to the voting machines, because everyone already has access by definition.

Open source addresses both the pragmatic transparency issues and the political ones.

1

billy_teats t1_its8mb6 wrote

To be fair, open source software is increasingly being compromised. Some of the modern attacks wouldn't work against election systems, like someone taking over an old domain that didn't get renewed so they can receive a maintainer's custom email and reset the password on the account that owns a repo. Or someone making a pull request with a bug fix that also, oh yeah, slips in a little call-out to a C2 server.

−5

PaulSnow t1_ittguvh wrote

Not really. A tiny inactive project can run all those risks, sure. But voting software to be used in the US is going to be a big, active project, and many interest groups will be willing to pay for reviews of the source.

Every change sticks out like a sore thumb; hiding an exploit in a bug fix is more of a movie plot than a reality. Automated testing and source analysis will pick up any call-out from the software with no human intervention.
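As a toy illustration of that kind of automated source analysis (not any certification lab's actual tooling; the module list and paths are assumptions), even a crude check can flag a networking call-out introduced by a "bug fix" in a codebase that is supposed to contain no networking at all:

```python
# Toy static check: fail the build if any source file imports networking modules,
# for a codebase that is supposed to have no networking functionality at all.
import ast
import pathlib
import sys

FORBIDDEN = {"socket", "http", "urllib", "requests", "ftplib", "smtplib"}

def forbidden_imports(source_dir):
    hits = []
    for path in pathlib.Path(source_dir).rglob("*.py"):
        tree = ast.parse(path.read_text(encoding="utf-8"), filename=str(path))
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                names = [alias.name.split(".")[0] for alias in node.names]
            elif isinstance(node, ast.ImportFrom) and node.module:
                names = [node.module.split(".")[0]]
            else:
                continue
            hits.extend((str(path), node.lineno, n) for n in names if n in FORBIDDEN)
    return hits

if __name__ == "__main__":
    violations = forbidden_imports(sys.argv[1] if len(sys.argv) > 1 else ".")
    for path, line, module in violations:
        print(f"{path}:{line}: forbidden networking import '{module}'")
    sys.exit(1 if violations else 0)
```

Real pipelines would go much further (call-graph analysis, dependency pinning, sandboxed dynamic testing), but the point stands: this class of check runs on every change with no human in the loop.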

1

Natanael_L t1_ittyuja wrote

The issue remains proving that the hardware runs that software and that software only. No extra chips, no modified chips, no tweaked semiconductor doping, and no exploits against the secure boot mechanism.

Even game consoles, the iPhone, and sometimes HSMs fail at this.

1

PaulSnow t1_itucs77 wrote

If the hardware is modified, this can be detected. And deployment should be done on the assumption that the voting machines themselves are hostile. So keeping hardware off networks, using fixed communication channels, applying blockchain tech (which prevents processes from accepting data that isn't properly registered or doesn't go through the fixed processes), and so on remain critical.

Proving security in an absolute sense is impossible, but pragmatic security is achievable. The unique requirements of voting software make it far easier to secure than any device that requires networking to be functional.

The most secure voting system is one that doesn't allow voting at all, since that prevents any exploit from capturing or corrupting ballots. Since that isn't an option, we do the best we can, which can be very good. None of the exploits discovered to date lacks a process to address it.

1

Natanael_L t1_itufvgn wrote

In practice, it's the paper copies that are the best security measure. It really isn't feasible to audit the hardware in full at scale.

1

PaulSnow t1_itwczyd wrote

Have we forgotten Florida already?

1

Natanael_L t1_itwew4c wrote

Do you think every voting machine in Florida can be X-rayed?

1

PaulSnow t1_itydl6t wrote

Not sure what X-raying voting machines is supposed to accomplish.

1

Natanael_L t1_ityfenv wrote

How do you think hardware tampering is discovered?

1

PaulSnow t1_itziv5a wrote

Through testing, architecture, and audited manufacturing.

Auditable manufacturing processes at every level.

Altering chips requires massive changes in workflows and processes.

Certification of manufacturers (not having your hardware manufactured in suspect countries like China).

Hardware designs that separate keys and security from general-purpose computing, plus embedded hardware testing and verification.

Hardware can also be architected to be self-checking, such that changes or alterations do not produce the same timing and values as the proper hardware.

https://www.securityweek.com/closer-look-intels-hardware-enabled-threat-detection-push

I can't find any reference for detecting hardware modifications with X-rays.

1

Natanael_L t1_itzm4n5 wrote

Did you not look at the link I provided above?

1

PaulSnow t1_iu0g3ko wrote

I don't remember a link talking about X-rays, and a review of the thread history didn't reveal a link from you that I missed.

So what am I looking for?

1

Natanael_L t1_iu0m81m wrote

https://www.reddit.com/r/IAmA/comments/yd7qp6/i_am_the_coauthor_behind_acms_techbrief_on/ittyuja/

https://www.infona.pl/resource/bwmeta1.element.springer-147a2312-2fe6-3a08-9954-a904e950f9bb

> Instead of adding additional circuitry to the target design, we insert our hardware Trojans by changing the dopant polarity of existing transistors. Since the modified circuit appears legitimate on all wiring layers (including all metal and polysilicon), our family of Trojans is resistant to most detection techniques, including fine-grain optical inspection and checking against “golden chips”.

1

PaulSnow t1_iu23xnr wrote

Your first link is just your own post, and it doesn't mention X-raying anything.

The second mentions optical inspection and checking against "golden chips", which aren't X-ray techniques, and there is no reference to X-raying hardware in the abstract. And I don't have a subscription to read the paper.

1

Natanael_L t1_iu2a0dm wrote

https://spectrum.ieee.org/chip-x-ray

And optical inspection is common, yet less capable of detecting attacks like manipulated silicon doping.

1

PaulSnow t1_iu2kxqj wrote

The article does not say they can detect doping; their test found a flaw in an interconnect layer.

But fine: you would do a statistical examination of batches of chips. Done. Note that their process is destructive.

1

billy_teats t1_ituh1hz wrote

> Hiding an exploit in a bug fix is a movie plot

Well, this is straight-up wrong.

1

PaulSnow t1_itvxdnw wrote

The kind of exploit you describe (a call-out over the network hidden in a bug fix) is in fact very unlikely. It is pretty easy to find in code that is reviewed and tested, as with most open source projects.

That's especially true for applications, like voting applications, that have no networking functions.

1

TheUnweeber t1_itsvzfu wrote

...and what about trustless ledgers? That is probably the safest direction to be heading in.

1

lpd1234 t1_itttw1o wrote

Just use paper ballots FFS.

2

PaulSnow t1_itucynn wrote

Because we have never had issues with paper ballots. /s

0

lpd1234 t1_itwfqej wrote

Elections do not occur often enough in most societies to warrant an electronic ballot system. Many places that have tried have gone back to paper. Paper is not infallible, but it does represent something most people trust. Now, if you have segments of your society actively undercutting the legitimacy of elections, that might be something to pay particular attention to.

1

PaulSnow t1_ityejte wrote

Fair and reliable voting is the goal. We know paper ballots don't solve all problems. Nor does electronic voting.

As to the increasing doubt about our election security...

Both the left and the right are casting doubt on elections in the USA.

It's the level of conflict between the far left and the right that's at the bottom of this. Once you have demonized the other side past a certain point, how can the two sides work together to hold fair, honest elections?

And if the other side is a literal threat to the future, what is the justifiable limit on what you will do to keep them from power?

At least we are not yet assassinating public figures at the rate we did in the 1960s. But how far away are we, given the level of rhetoric we've heard since Trump got involved?

2