How should we measure the effectiveness of NSA programs?

Yale law student here. Embarrassingly.

Not looking for massive rewrites, just help working this somewhat rambling set of paragraphs into a coherent paper.

Having a clear, coherent structure is what matters most, and it's the part I'm having trouble with. I want to make sure readers can follow the argument (I'd also just appreciate comments on anything you don't understand).

Tone: Conversational is good. Rigorous, but not excessively academic.

If anybody can do Bluebook citations I would be SO grateful, but I can of course just do that myself. I just haven't slept for, like, 3 days and am running out of steam.

THE PAPER (Basic gist of what I’m trying to say):

Background:
Right after Snowden, the government released (and everybody fixated on) this "54 attacks prevented" figure. While it's initially compelling, it's really the wrong way to measure the value of intelligence.

Months later, the Privacy and Civil Liberties Oversight Board released its report on the Section 215 metadata program. Their goal was to determine effectiveness and assess privacy implications, and even though they noted that "attacks prevented" is a bad way to measure the value of intelligence, their analysis basically falls into the same trap.

Everybody seems to want a better way to gauge value, but nobody really knows what that is. (All of this is the intro.)

(Then I want a paragraph explaining what the paper aims to do; the current one is not usable, as it doesn't actually describe what the paper goes on to say!)

Foundational question: Why is it hard to find good metrics for intelligence?
Basically because it's predictive (so the goal is non-events), collaborative (hard to assess any particular program in isolation), and any possible outcomes are really remote from the creation of the intelligence. (Plenty of good intelligence, in other words, doesn't prevent attacks.)

With that in mind,
Is it even POSSIBLE to measure intelligence using outcome metrics (attacks prevented, early warning)?
– Lots of people in the intelligence community seem to think it's not.
– Outcome metrics are also seen as counterproductive in similar areas like scientific research and industrial R&D departments. Like intelligence, these fields aren't closely connected to the final outcomes they hope to produce.

So,
If we shouldn’t measure intelligence based on outcomes (whether it prevents an attack), what kinds of things SHOULD we measure? What qualities would a good metric have?
- probably should be really closely tied to process
- responsive to the specific goals of a program (metadata collection, for instance, aimed to increase speed)

 
