== Q&A from Andrew Ivanov at practice talk at SUSY Meeting (04/15/09)

Q 1. - Is your signal MC a combination of chi1-chi1 and chi1-chi2 processes generated for a specific GMSB parameter point? Why, in Table 2 of note 9575, is the production cross section given as a function of chi0 mass? Which parameters do you actually vary when you generate your signal MC?

A: As described in Section 4 of the CDF note, all production processes are simulated. For SPS 8 all masses scale with the SUSY scale (L), so for simplicity we take the neutralino mass as our free parameter, as described in Sections 1.1 and 4. The parameters are listed in Table 1 of the note. To get different chi0 masses we vary the SUSY scale (L) and the messenger mass scale (m), keeping the ratio m/L = 2 fixed. This is also described in Section 4 of the note.

Q 2. - How soft are the taus produced in your signal MC? Since there are several of them in the decay chain, I imagine that the most energetic tau might produce a high-Pt isolated track in the event. If so, one could divide the pre-selected sample into two regions, with and without such a track (EWK and QCD dominated respectively), loosen the selection cuts in each, and that could increase the sensitivity to your signal. Did you consider that?

A: We did consider identifying final-state taus and/or isolated tracks as a way to improve sensitivity. As described in Section 1.1 of the note, we have chosen to keep this search as general as possible for now, including the taus in the jets as part of the HT optimization. We may consider this in a future search, and we will use this technique in the updated Delayed Photon search.

Q 3. - I didn't quite follow how you justify that the photons will have 100% trigger efficiency. I also don't see where you apply a systematic for that. What number did you use in the previous version of the analysis? How fast does the efficiency drop with energy due to the chi_CES cut, and wouldn't you be affected by the turn-on curve of the high photon ET triggers?

A: We used 100% trigger efficiency in the previous analysis. Our analysis is not affected by the turn-on curve since our photon ET's are so high. Also note that we use the OR of four triggers (DIPHO_12, DIPHO_18, PHO_50, and PHO_70). We have added a paragraph on this, along with the photon ET distribution after all kinematic cuts, in Appendix C of the note. Also see CDF note 9533 on the diphoton trigger efficiency.

Q 4. - You showed that the acceptance drops once you include min-bias in your signal MC. But the MC does not have enough min-bias events. First, the luminosity profiles differ: the data have higher luminosity. Second, the MC used to underestimate min-bias by 20%, and I am not sure whether that problem is fixed in the sample you generated. Most analyses are not very sensitive to this, but since your acceptance drops by quite a bit, the higher luminosity profile may decrease it further. To estimate this you could check how the acceptance scales as a function of the number of vertices in the event; re-scaling the number of vertices in MC to the distribution in data would then give you the size of the effect.

A: Appendix B is where we detail the changes since the 2.0 blessing. There are many effects beyond the addition of min-bias to the MC that reduce the acceptance relative to the first blessed version of this analysis (blessed on 11/06/08, version 2.0 of the CDF note); these are described in detail in Appendix C of the note. We have a tighter cut on dphi, which removes signal, and we add the vertex-swapping and MET clean-up cuts, which also remove signal. Also, Figure 24 (acceptance vs. run) in Appendix C of the note shows there is no drop as a function of luminosity, so min-bias effects are negligible.
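Below is a minimal Python sketch of the vertex-reweighting cross-check suggested in Q 4: measure the acceptance as a function of the number of reconstructed vertices in the signal MC and reweight the MC vertex multiplicity to the distribution observed in data. The array names and the ntuple-level inputs are illustrative assumptions, not taken from the analysis code.

```python
import numpy as np

def reweighted_acceptance(nvtx_mc, passed_mc, nvtx_data, max_nvtx=10):
    """Acceptance before/after reweighting the MC vertex multiplicity to data.

    nvtx_mc   : number of reconstructed vertices per signal-MC event
    passed_mc : bool per MC event, True if it passes the full selection
    nvtx_data : number of vertices per event in the data presample
    (all inputs are illustrative ntuple-level arrays)
    """
    bins = np.arange(0.5, max_nvtx + 1.5)  # one bin per vertex count, 1..max_nvtx
    idx_mc = np.clip(nvtx_mc, 1, max_nvtx).astype(int)

    # Normalized vertex-multiplicity distributions in MC and data
    h_mc, _ = np.histogram(idx_mc, bins=bins, density=True)
    h_data, _ = np.histogram(np.clip(nvtx_data, 1, max_nvtx), bins=bins, density=True)

    # Per-event weight = data/MC ratio in the event's vertex bin
    ratio = np.where(h_mc > 0, h_data / h_mc, 0.0)
    w = ratio[idx_mc - 1]

    acc_raw = passed_mc.mean()                 # acceptance with the MC luminosity profile
    acc_rw = np.average(passed_mc, weights=w)  # acceptance after reweighting to the data profile
    return acc_raw, acc_rw
```

The difference between the two numbers is a direct estimate of how much the higher luminosity profile in data would change the acceptance.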
Q 5. - Z(nu nu) contribution. You use an [86,96] mass window from Z->mu mu, which could be too narrow and might underestimate the actual prediction. During the talk you said that you apply a large uncertainty for that, but it looks like it is only 30%. Perhaps just generating Z(nu nu) will solve all of these problems.

A: A Z(nunu)+gamma sample has been produced and we now have a direct estimate of this background, as described in Sections 3.3 and 6.

Q 6. - You have quite a detailed description of the systematic uncertainties for the signal, but it is very brief for the SM backgrounds. In particular, your QCD uncertainty, which used to be 100% in the previous analysis, is now 50%. Is that due to more statistics?

A: The background uncertainties are described in Sections 3.2, 3.3, and 3.4 of the note. The dominant QCD uncertainty comes from the statistics of the sample and is determined from the number of events that pass our final kinematic cuts in 10 pseudo-experiments. That said, QCD is not the dominant background, nor would reducing the overall error on the estimate significantly help our sensitivity. This is now made more explicit in Section 3.2.1 of the note.

Q 7. - I asked this during the meeting. How many events do you end up with for the EWK MC samples after the final selection? Is the statistical uncertainty applied to the EWK prediction due to MC statistics?

A: No. The final EWK background prediction has both statistical and systematic contributions: 0.92 +- 0.21 (stat.) +- 0.30 (syst.), as described in Tables 18 and 19 in Section 6 of the note. More details on the number of events passing the cuts are given in Table 18.

Q 8. - How do you generate pseudo-experiments for the final optimization? If the MC statistics are low, the pseudo-experiments can be highly correlated. In other words, if you divided your MC samples into two sub-samples and performed two independent optimizations, would you arrive at the same final selection cuts for the best expected limits? It would be good to check that your a-priori analysis is invariant under this. You could do the same exercise for the QCD background.

A: As described in Section 3.1 of the note, we use 10 pseudo-experiments for the QCD Met Model prediction only. We used the CDF standard limit calculation tool, taking into account the errors on the backgrounds. It is very unlikely that we picked up a minimum by fluctuation between the old (2.99) and new (2.79) values of, for example, the dphi cut; the expected limits are almost flat in that region. Again, the errors are large and are taken into account in the expected cross-section limits, and QCD is not the dominant background.
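For completeness, here is a rough sketch of the split-sample invariance check suggested in Q 8: split the background MC into two random halves, optimize the dphi cut independently in each, and compare. A simple s/sqrt(b) figure of merit stands in for the CDF standard limit calculation used in the analysis, and the array names, scan range, and cut direction are all illustrative assumptions.

```python
import numpy as np

def best_dphi_cut(dphi_sig, w_sig, dphi_bkg, w_bkg, cuts=np.arange(2.0, 3.1, 0.01)):
    """Scan the dphi cut and return the value maximizing a simple s/sqrt(b)."""
    fom = []
    for c in cuts:
        s = w_sig[dphi_sig < c].sum()  # weighted signal passing the cut (direction illustrative)
        b = w_bkg[dphi_bkg < c].sum()  # weighted background passing the cut
        fom.append(s / np.sqrt(b) if b > 0 else 0.0)
    return cuts[int(np.argmax(fom))]

def split_sample_check(dphi_sig, w_sig, dphi_bkg, w_bkg, seed=1):
    """Optimize the cut on two random halves of the background MC and compare."""
    rng = np.random.default_rng(seed)
    half = rng.random(len(dphi_bkg)) < 0.5
    cut_a = best_dphi_cut(dphi_sig, w_sig, dphi_bkg[half], w_bkg[half])
    cut_b = best_dphi_cut(dphi_sig, w_sig, dphi_bkg[~half], w_bkg[~half])
    return cut_a, cut_b  # should agree within the flat region of the figure of merit
```

If the two halves return noticeably different cut values, the optimization is being driven by MC fluctuations rather than by the shape of the expected limit.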
Q 9. - Figure 14 of the note has some events around metsig ~= 12. It looks a little strange that the distributions for both the EWK and signal MC drop around metsig = 6 and then some extra events appear as a peak at high metsig values. Is it some pathology of the MET modeling? I understand that you can just put all of the events above 6 in the overflow bin, but I think it would be good to understand what kind of events produce such a peak. If it were, say, a bug in the code, you would want to fix it, because it does affect your limits.

A: We changed the metsig plot to have an overflow bin at metsig = 10. As described in Section 4 of the note, there is a subset of events with low metsig (< 7) because, while the non-interacting particles are highly energetic, they may not be at small eta, or there may be two (or more) that point in opposite directions and cancel each other out, giving small MET. The second region, above about 7 and including the overflow bin at metsig = 10, is due to events with large MET. Any detail above 7 is significantly affected by the estimation techniques in the Met Model and should not be taken too seriously; this is why we place the overflow at 10. See the 04/29/09 entry at http://hepr8.physics.tamu.edu/elee/ggMet.html for the study in more detail.
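For reference, a minimal sketch of the overflow treatment described above, in which everything with metsig above 10 is shown in the last bin rather than in an untrusted tail. The binning and function name are illustrative, not taken from the analysis code.

```python
import numpy as np

def metsig_hist(metsig, edges=np.arange(0.0, 10.5, 0.5)):
    """Histogram metsig with all values >= the last edge folded into the last bin."""
    # Clip values just below the upper edge so overflow lands in the final bin
    clipped = np.minimum(metsig, edges[-1] - 1e-6)
    counts, _ = np.histogram(clipped, bins=edges)
    return counts
```

This only changes how the tail is displayed; the event counts above the last trusted bin are preserved rather than dropped.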