Page 15 of the “Risk-Limiting Audit Implementation Workbook” describes how RLAs are made easier with the help of audit software designed to help election officials manage the data RLAs rely on and perform the calculations to conduct the audit. Based on your experience with an RLA tool, or after reviewing the diagram on pages 16 & 17, what additional features should be considered for developing a universal RLA tool?
This is an excellent question. And I dare say we can get some good ideas going here.
But note that there are already lots of feature ideas, and discussion of previous features, incomplete work, and bugs, at the original GitHub site for the software: Issues · FreeAndFair/ColoradoRLA. Unfortunately, recent development hasn’t kept the issues there up to date. I urge the community to bring development more into the open and use that site not just for bug reports, but also for feature requests and the details of implementation discussions.
@nealmcb thanks for pointing out the work done on the FreeAndFair site! As you know, VotingWorks is now leading the development of RLA software built on the pioneering work done by Colorado, FreeAndFair, and Democracy Works. They anticipate having a GitHub site soon to facilitate technical collaboration from the RLA community. I’ve posted this question hoping to gather and document ideas from a user perspective (state and local officials, audit board members, etc.) about how the tool should look and function. My guess is the discussion will be spread between both technical and non-technical discussion groups and sites.
If sensibility prevails, there will always be more than one source of support for any software used for auditing, and multiple instances of software and even components for audits supplied by different suppliers.
(I hope there will never be a presumption that one company can take care of audit software for the US or that a company is necessarily needed. I hope everyone recognizes the need for various competing audit solutions.)
And I trust there will always be a failover mechanism that is applicable to local use and readily available.
The audit software currently implemented will require its own audit, and what Colorado has used so far does indeed need an audit to verify that it is doing what is expected.
To do that audit of the audit software, the CVRs must be made available to the public, and they must be defensible by the authors or administrators of the devices that produced the CVRs. Colorado does hash each county CVR package, but not in a divisible way that would permit CVRs to be carved up by contest as may be needed. The RLATool software is currently presumed to have aggregated all CVRs for at least the “driving” contest, but we have no way to verify a successful software aggregation by external independent means.
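To illustrate what a divisible commitment could look like (this is a sketch of the general idea, not how Colorado’s CVR packages actually work): instead of hashing each county package as one blob, hash each CVR record individually and publish the list of per-record hashes plus one overall hash over that list. Then any subset of CVRs, e.g. all records for one contest, can be verified against the published hashes without releasing or re-hashing the whole package. All function names below are my own, for illustration only.

```python
import hashlib

def record_hash(cvr_line: str) -> str:
    """Hash one CVR record (here, one line of a CSV export)."""
    return hashlib.sha256(cvr_line.encode("utf-8")).hexdigest()

def commitment(cvr_lines):
    """Per-record hashes plus an overall commitment over the hash list."""
    hashes = [record_hash(line) for line in cvr_lines]
    root = hashlib.sha256("".join(hashes).encode("utf-8")).hexdigest()
    return hashes, root

def verify_subset(subset_lines, subset_indices, published_hashes, published_root):
    """Check a subset of CVRs (e.g. one contest) against the published hashes."""
    for line, i in zip(subset_lines, subset_indices):
        if record_hash(line) != published_hashes[i]:
            return False
    # The published hash list itself must match the published root.
    root = hashlib.sha256("".join(published_hashes).encode("utf-8")).hexdigest()
    return root == published_root
```

A Merkle tree would make subset proofs smaller, but even this flat scheme lets an observer confirm that the per-contest CVRs they were handed are the same records the county committed to.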
on other topics:
I read Part 1 and saw several uses of the word transparency, but nothing about the public access to process or records that is needed to complement and complete the credible role of the tabulation audit.
It states that a statewide RLA cannot be done without software. This troubles me. Also troubling is the advice to put very little in statute - even the risk limit. I think much more than CO put in statute is needed: the scope of contests, the ballots to be included, who decides crucial policies and when, public reporting and access to process and records, the timing of release of audited results and sampling decisions, and completion before certification - all that in addition to what CO included (and a hand count in the definition of incorrect outcome, too). CO has a lot of built-up expertise that other states don’t have. They need more statutory support than CO did, and CO is still using compromises that we can only hope are not permanent.
I read in Part 1 an overbroad definition of RLA and a kind footnote about a heuristic interpretation. The first few pages never indicate that the scope of an RLA is only tabulation; instead I see the usual claims of an ability to correct outcomes. The last few pages do say tabulation audit - that is much better. And there is a helpful sidebar about other things to audit. That sidebar doesn’t yet include the computerized and manual eligibility determination used with remote voting methods. Signature verification (and the lack of it) represents a really big potential source of outcome error that makes the claim that “RLA limits the risk of confirming an incorrect outcome” solidly incorrect (unless the definition of incorrect is cleverly limited to tabulation to make the statement true). Unfortunately, this somewhat embarrassing reality does deserve treatment in this important document.
Otherwise I think Part 1 is very helpful.
Sorry I don’t know how to use thread topics here yet
Thank you for your thoughtful commentary, Harvie!
We’ll need to work out good practices here over time, but my sense is that in this Discourse forum, it would be most helpful to break out several “topics” from what you’ve said, so it is easier to read about a particular topic. You can create a new topic from the home page. You could even then link to them by editing or replying to your original message.
For organizing the discussions, the Discourse folks recommend using tags much more than categories.
Normally a Discourse system is set up to gradually allow users an expanding set of options and permissions, via Trust levels. See e.g.
The defaults may be different here.
Thanks @nealmcb! I like the idea of tags. I will work on getting some set up this week. I’m relatively new to Discourse, so I will try to get up to speed.
@harvie thanks for your feedback. I agree with @nealmcb: there are several topics here, and they are all worth discussing in greater detail. I will try to create some new categories and/or tags and move some of the issues you brought up to their own threads. In the meantime, let me try to provide a few brief answers.
There has never been any presumption that VotingWorks/Arlo would be the sole audit software available to states and local jurisdictions. That is why everything will continue to be open source as well as open to collaboration.
You are not the first to bring up the need to audit the audit software and I recognize that it’s an important piece of the RLA puzzle to solve. You and I have had previous discussions during my time in CO about why making all CVRs available to the public can be problematic. I will create a separate topic so we can discuss the issues in depth.
Regarding your concerns about transparency and public access to records: please take a look at page 29 of Knowing It’s Right Part 2: Risk-Limiting Audit Implementation Workbook. I tried to touch on some of the information that should be made public. Happy to take input to enhance this list in subsequent guides.
I stand by my statement that a statewide RLA cannot be done without software. I’m happy to have you point out a state that has carried out such an endeavor without the assistance of software.
Most of the items you recommend be committed to statute, I recommend be adopted through administrative rule. (See page 18 of Part Two of the RLA guide.) This is still a pioneering endeavor, and administrative rule provides a more flexible way to define policies and procedures. More than 10 years ago, an RLA law was passed in NJ committing the state to an RLA with a risk limit of 1% for federal and gubernatorial elections and a 10% risk limit for all other elections. Changing that statutory language is difficult. My current feeling on the risk limit is that statute should set a maximum threshold rather than adopt a fixed percentage. This gives states the flexibility to set a risk limit based on the sampling method they are using and the scope of the election.
Regarding my RLA definition. I received quite a bit of feedback from a number of experts who felt comfortable with the RLA definition I published, as well as the other terms and definitions in Part Two. This is probably going to be one of those things where we will never all be in agreement. What I would encourage everyone on this discussion board to consider is how important it is that we have standard terms and definitions and we use them consistently in discussing policy and practices with election officials. I firmly believe it is one of the bigger hurdles to wide-scale implementation.
Regarding #4, your challenge is unfair. Show me first any state that has conducted a statewide risk-limiting audit. I believe that software solutions must be unbundled so that a manual fallback or failover is possible for specific components - the PRNG is likely the one component most difficult to accomplish without software. All other components must be doable without software, or verification is also impossible.
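On the PRNG point: the approach Colorado used (in the spirit of Rivest’s public sample-selection code) is simple enough that any observer with the publicly rolled seed can re-run the draw independently, which is a kind of manual failover for that component. A minimal sketch of the idea follows; the function name is mine, and real tools add refinements such as rejection sampling to avoid modulo bias and sampling without replacement.

```python
import hashlib

def draw_ballots(seed: str, num_ballots: int, sample_size: int):
    """Reproducible pseudo-random sample (with replacement) of ballot
    positions 1..num_ballots, driven by SHA-256 of the public seed plus
    a draw counter. Anyone with the seed gets the identical sequence."""
    draws = []
    counter = 0
    while len(draws) < sample_size:
        counter += 1
        digest = hashlib.sha256(f"{seed},{counter}".encode("utf-8")).hexdigest()
        # Interpret the 256-bit digest as an integer and map it to a ballot.
        draws.append(int(digest, 16) % num_ballots + 1)
    return draws
```

Because the only inputs are the seed (rolled with dice in public) and a counter, independent parties can verify every selected ballot with a few lines of code, or even by hand with a SHA-256 calculator.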
One step that would be extremely difficult to do without software is the validation of the CVR data against the reported results, especially in sizable jurisdictions. This validation could be done, clumsily, with off-the-shelf software like Excel but customized software would allow additional sanity checks on the CVR data.
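As a sketch of what that validation step could look like (the column names and one-row-per-ballot layout here are simplifying assumptions, not any vendor’s actual CVR export format), the core check is just re-tallying the CVR file and diffing the totals against the reported results:

```python
import csv
from collections import Counter

def tally_cvrs(cvr_path: str, contest: str) -> Counter:
    """Re-tally a CVR export for one contest. Assumes a CSV with one row
    per ballot and 'contest' and 'choice' columns (a simplification)."""
    totals = Counter()
    with open(cvr_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["contest"] == contest:
                totals[row["choice"]] += 1
    return totals

def discrepancies(cvr_totals: Counter, reported: dict) -> dict:
    """Choices where the CVR tally and the reported results disagree,
    mapped to (cvr_count, reported_count) pairs."""
    choices = set(cvr_totals) | set(reported)
    return {c: (cvr_totals.get(c, 0), reported.get(c, 0))
            for c in choices if cvr_totals.get(c, 0) != reported.get(c, 0)}
```

Purpose-built software would layer further sanity checks on top of this, e.g. ballot counts per batch, duplicate ballot IDs, and overvote/undervote consistency.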
I’ve spent the last 6 weeks attending a number of state and national election conferences talking about RLAs. It seems clear to me that one of the pressing needs right now is an add-on to the RLA tool being developed by VotingWorks that would allow state and local officials to enter the results of a contest from a prior election, adjust the risk limit and the sampling method (ballot comparison, batch comparison, ballot polling), and see approximately what the workload might look like for that contest under the different variables.
The question I’m asked repeatedly is: how efficient is an RLA compared to my current fixed-percentage audit? A sandbox or demo tool might allow election officials to go in on their own and see what an initial sample size would look like for any past election. It would be especially helpful when they want to understand what a tight margin might look like (in terms of workload) and how that relates to the chosen risk limit.
Thoughts? Would this be helpful? Are there other features that might be beneficial to add?