
These are unedited transcripts and may contain errors.

The DNS Working Group commenced on the 8th October 2009 at 11 a.m.

CHAIR: Good morning everybody. This is the DNS Working Group. If you were expecting to have discussions about Internet exchanges and peering policies and stuff like that, I think you are in the wrong room, you might want to go next door.

Before we get things underway, first of all, the proceedings are being recorded and webcast, so whenever you come to the mikes to make comments or ask questions, please give your name and affiliation. This will help the scribes, the nice lady who is doing the transcription services, and it will also help the people out there in webland who are following the proceedings and may not recognise whose voice is whose. Please make sure your phones and pagers are on silent mode.

As you know, we have got the usual stuff here; Peter has put up the slides. The obvious things: there is the home page for the Working Group, how to contact us through the mailing list, and the co-chairs for this Working Group: myself, Jim Reid; Peter Koch, who is going to chair the second session this afternoon; and Jaap Akkerhuis, who is, I hope, hiding somewhere in the room at the moment.

Having looked at the agenda, does anybody have any comments to make on the ordering that we have here, or any suggestions for agenda items or things that we have otherwise not got covered?

Okay then. Next, we have the minutes of the RIPE 58 meeting, and these were posted very late, for which your co-chairs have to apologise. It's our inability to do things rather than the NCC's: we got the minutes early enough and collectively dropped the ball in actually getting the minutes updated, checked and circulated. I apologise for that; consider your co-chairs to have had their wrists well and truly slapped, and we'll make sure we don't make the same mistake in future. Peter is going to go through the action item list and any action items or things that were discussed in the RIPE 58 minutes.

PETER KOCH: Thanks Jim. Good morning. I have the interesting task of going through some of the action items. You'll find the full action item list under the URL posted on the slides. The slides are available online as well, of course.

Why do we have two numbers for this action item? Well, that's cut and paste, but that also happened last time. The background of this was that the NCC found out that the trust anchors for the NCC-maintained zones had appeared in the ISC DLV, and there was some discussion about this back and forth at RIPE 57 and then subsequently at RIPE 58. The NCC received the task to work on this with ISC and give us some update and background, and that's going to happen in the RIPE NCC update that Anand will be giving this afternoon.

Then we had two other action items arising from last time. The first, 58.2, also originates from Anand's RIPE NCC report, where they found out that some of the objects in the database were actually causing more trouble than use. These are domain objects for child zones being registered where the registrant or the LIR submitting them would expect these zones to be provisioned, or this information to be reflected in the respective reverse zones at the NCC; but since the parent zone was already provisioned there, this child information has no effect, to the confusion of everybody, because there is no error report or anything. I guess that will be covered in Anand's report as well. We also talked about this in the Database Working Group, and the DB group of the RIPE NCC (Paul is sitting in the back, I guess; sorry for putting you on the spot) is working on cleaning this up. There are several thousand objects affected here which do not have any effect. So probably on the list or at the next RIPE meeting we can have a progress report on this. It has actually received another action item number in the Database Working Group, if memory serves correctly. Is that right, Paul?

Anyway, it's been taken care of there.

Then another issue that arose from Anand's report, and from subsequent discussions which we unfortunately had to cut short last time but will continue this afternoon, was the lame delegation detection project carried out by the NCC. We had some discussion, Shane made several comments and had a proposal, and the NCC was asked to dig further into this and do some research. They cooperated with Shane and we will see the presentation this afternoon, and I guess Anand will also touch upon this in his report. So for now we can't close any of these, but we probably can after this afternoon's presentations.

There are some other, older action items we can report progress on. One is the rev-srv attributes, which in ancient times probably had some function documenting the reverse delegation name servers for the address objects. After the introduction of the domain objects for the reverse tree, the rev-srv attributes in the inetnum objects didn't have any particular use, so we came up with a proposal a couple of RIPE meetings ago to get rid of them. This was taken over to the Database Working Group. They agreed, the RIPE NCC received an action item there, and everything is done.

These rev-srv attributes have been converted into remarks only, so we can close this one.

Another one that is still being worked on in the database department, which is probably of interest here as well since we touched upon it, is forward domain objects in the RIPE database. We have a couple of TLD objects, either referring to the TLD WHOIS server, or some random second- or third-level domain objects in the RIPE database, and the agreement here was, for the sake of consistency, data maintenance and quality, to get rid of this information in the RIPE database. This is still on the desk of the database group in the NCC as an ongoing action item, and progress will be reported next time. And that should be basically it.

Any questions?

Thank you. Back to the agenda, and Jim.

CHAIR: Next up we have Carsten Strotmann, giving us an update from the IETF.

CARSTEN STROTMANN: Good morning. An update on what has happened in the DNS-related space at the IETF meeting, which took place at the end of July in sunny Stockholm.

I am starting with the DNS Extensions Working Group, where there was a lengthy discussion about two drafts which stem from the Kaminsky issues of last summer, and which collect all the ideas that could make DNS more resilient. Some of the ideas in these documents have been seen as not optimal, or might hurt more than they help the DNS.

So there was some discussion around whether it is dangerous to put them in an RFC, because that might have a kind of Midas touch: something that's not good turns into gold by putting it in an RFC, but it might turn out to be fool's gold. Afterwards, some other options were presented for what to do in the Working Group; the minuses and the pluses indicate the amount of humming or silence in the room, where minuses mean more silence and more pluses mean more humming. You see that for the option to do nothing there was silence; to adopt the first draft, there was even more silence. I had the feeling during the session that this is like nuclear waste: it's there, but nobody wants to touch it.

For the second draft there was also silence, but not as much as for the first, and there was some positive feedback, though not much; and a little bit for adopting both drafts, merging them together and working on them further. So we will have to see what happens there next.

Then there is a new draft about the overhaul of EDNS0, which got a new editor. There was some interesting data presented about DNS path MTU, that is, how to measure the packet sizes that can be handled along the chain of DNS servers a query might go through, because the servers along the path might have different EDNS and packet-size capabilities. There is a tool called DNS funnel, which you can download as part of a larger package from the URL shown, that lets you test the DNS path MTU yourself and do some testing on that.
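To make the EDNS0 discussion concrete, here is a minimal sketch in Python (not from any tool mentioned in the talk) of how a query advertises its UDP buffer size with an OPT pseudo-record, per RFC 6891; the fixed transaction ID and the function name are simplifications for illustration:

```python
import struct

def build_edns0_query(name: str, payload_size: int = 4096) -> bytes:
    """Build a minimal DNS query for an A record with an EDNS0 OPT
    pseudo-record advertising `payload_size` as the UDP buffer size."""
    header = struct.pack(">HHHHHH",
                         0x1234,  # transaction ID (fixed here for simplicity)
                         0x0100,  # flags: standard query, recursion desired
                         1,       # QDCOUNT: one question
                         0,       # ANCOUNT
                         0,       # NSCOUNT
                         1)       # ARCOUNT: the OPT record
    question = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    question += struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    # OPT pseudo-RR (RFC 6891): root name, TYPE=41, the CLASS field carries
    # the advertised UDP payload size, the TTL field carries extended flags.
    opt = b"\x00" + struct.pack(">HHIH", 41, payload_size, 0, 0)
    return header + question + opt
```

A resolver with a smaller buffer would simply advertise a smaller size, which is exactly the kind of variation a path-MTU measurement walks through.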

Then there was very little discussion on a topic that is being worked on in the BEHAVE Working Group, called DNS64. It is basically about synthesising AAAA records from A records, so that an IPv6-only network can access IPv4 content. Andrew Sullivan, the co-editor and also Working Group chair, asked the DNS Extensions Working Group to give feedback to the BEHAVE Working Group, and that is being worked on.

Then Shane Kerr presented a draft about IXFR-only, which deals with incremental zone transfers. Today, if a DNS server asks for an incremental zone transfer and the other party, the master, cannot deliver one, it just sends a full zone transfer. That is not always the best outcome, because a slave name server might have multiple masters, and might want to try each of them for an incremental transfer before accepting a full one, especially for large zones. So with this new query type, IXFR-only, if the master cannot deliver an incremental zone transfer, it should just send back an error message instead of a full zone transfer.
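The master-side decision described above might be sketched like this (a toy model; the names and return values are illustrative, not taken from the draft):

```python
def answer_transfer_request(query_type: str, have_incremental: bool) -> str:
    """Decide how a master answers a zone transfer request, following the
    behaviour described for the IXFR-only proposal.

    - Plain IXFR: fall back to a full AXFR when no incremental is available.
    - IXFR-only: return an error instead, so the slave can try another
      master before settling for a full transfer of a large zone.
    """
    if have_incremental:
        return "IXFR"    # send the incremental transfer
    if query_type == "IXFR":
        return "AXFR"    # classic fallback: full zone transfer
    if query_type == "IXFR-only":
        return "ERROR"   # let the slave try its other masters first
    raise ValueError(f"unknown transfer query type: {query_type}")
```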

There was a mild hum in the Working Group to adopt this document, and it has been discussed further on the mailing list.

Then a larger discussion happened about the charter changes, mostly about a DNS RFC guide, because it was seen that, especially for new developers, it's almost impossible to get an overview of how to implement DNS correctly: there are so many RFCs, some are obsolete, some are updated, some are obscure, some are not complete. So the idea was to create a yearly summary of all the RFCs that implementers need. There was a big discussion about what form this document should take: should it be an RFC, a living document, or something else? The result is that something is needed, but it should not be an RFC. Now the big discussion is how an IETF Working Group can work on anything which is not an RFC.

Then we had a presentation on a single query type that would return both A and AAAA records, to get rid of the overhead we have today, where a client might first ask for a AAAA record and, if that's not available, ask for an A record. So the idea is to combine both in one query type. The Working Group indicated that this is interesting and work needs to be done on it, but not with this draft. So there will probably be more work seen in the IETF on this one.

And then there is a new draft on signalling the algorithms understood or desired by a validating resolver. Having multiple different encryption algorithms in DNSSEC can lead to really big packet sizes, so it might be a good idea to have the resolver indicate which algorithms it understands, so that the DNS server can respond with just those algorithms and not with everything available.

This is being worked on and Paul Hoffman volunteered to write an Internet draft on this for consideration.

So, that was the DNS Extensions Working Group. Now coming to the DNS Operations Working Group. There was a draft which deals with all the interactions between the various timing parameters in DNSSEC. This still needs more work, and there will be a new draft at IETF 76, which will be in November in Hiroshima.

Then there was a presentation about the idea of a trust history for local validating resolvers, which is especially useful for laptops or local home users that have validating DNS resolvers but might be offline for a longer period of time and miss some key rollovers. They need a way to catch up from the data they already have, and the idea is to keep some kind of trust history, that is, the old DNSSEC keys, somewhere, so that the resolvers can catch up and roll forward to the currently active DNSSEC keys.

This document has been accepted as a Working Group document. But it's not clear whether that will be in the DNS operations or DNS extensions Working Group. So that is still in discussion.

Then we have something about DNS redirect, which talks about different ways of playing with DNS traffic, man-in-the-middle style: what providers do, and what other parties such as law enforcement agencies might do. This is not about making it a standard, and it's not saying DNS redirection is a good thing. It's more: if you need to do it, here is how to do it with the least harm. But still, as you can imagine, there was a lively discussion about whether to write this in an RFC or leave it outside the IETF.

And then there is a new draft about a DNSSEC policy statement framework. This draft is especially for people who need to write a DNSSEC policy for their organisation, and should serve as a guidance document for people doing that.

And another draft is there on how to populate the IPv6 reverse tree automatically. As you know, some ISPs today populate the reverse tree for their DSL customers with just numbered names. That is not possible any more in IPv6, because it's simply too much data. This document describes some ideas for coping with that, like automatically synthesising reverse entries in the IPv6 reverse zones.

So what else happened, besides the two Working Groups? There was a DNSSEC panel organised with ISOC, staffed with these people. Matt Larson from VeriSign talked about the root zone signing; we also heard on Tuesday here that it will be signed in 2009 and then officially available in 2010. Information was also given at that panel that .net is planned to be signed in 2010 and .com in 2011. And Richard Lamb from ICANN was there and presented some of ICANN's ideas about DNSSEC.

And last but not least, there was an OpenDNSSEC pre-release in Stockholm. This is open source software for managing DNSSEC procedures, quite interesting if you need to sign your zones with DNSSEC and handle key rollovers and the like. You can download it from And I would like to thank the OpenDNSSEC team for the software and also for the nice release parties they gave in Stockholm. That's all from me. This is the link to the starter pages of both Working Groups.

One more thing: there is an Internet meeting in November in Stockholm. On the 6th of November there is a full OpenDNSSEC tutorial, so if you are interested in running OpenDNSSEC in your network and you want to get up to speed easily, come to Stockholm on the 6th of November; there will be a one-day tutorial on this.

So that's all from me. Thanks for listening.


CHAIR: Are there any questions for Carsten?

SHANE KERR: This is Shane Kerr. I wanted to point out that that draft also includes a proposal to let ISPs know that they don't have to provide reverse DNS for IPv6, because it's not possible the way you do it today. That was a somewhat contentious suggestion, but it may be interesting for people at the RIPE meeting.

SPEAKER: It was not possible to cram everything in ten minutes, I had that on my list.

CHAIR: Thank you very much. Thank you Carsten.


CHAIR: Okay, our next speaker is Dave Knight from ICANN, who is going to give us an update on IANA's ITAR and things of that nature.

DAVE KNIGHT: I am Dave Knight. I work in the DNS operations group at ICANN. I am giving this update this morning on behalf of Kim Davies from the IANA, who can't be here.

So, we started testing the ITAR in October 2008, and it went live as a beta three months later. The ITAR only contains signing material for TLDs, so only those delegations which in due course can be secured in the signed root zone. TLD operators submit and remove their keys through the website shown here; IANA staff then obtain consent for the changes from the admin and tech contacts listed in the root zone database. Once approved, the trust anchor will be published for the listing period, as long as there is a matching DNSKEY record visible in the TLD zone.

There have been 61 change requests submitted. 42 trust anchors have been approved and listed. There are currently 20 TLDs listed in the ITAR. 5 change requests were actually rejected by the domain contacts; as far as we know, it wasn't that people were trying to do bad things: they recognised they had made an entry mistake and retried. 7 failed to respond to the approval request. There are approximately 10,000 downloads of the ITAR every month; last week, I think 291 unique IP addresses pulled the ITAR.

One of the more interesting observations we have from the past year is that some operators are managing to send us invalid trust anchors. It's good to see that the submission procedures and technical checks that are in place are catching these. This is going to be instructive for the process of forming the DS submission procedures for the signed root zone.

In future, we want to add support for proactive notifications, reminders of impending key listing period expiry, and more useful debug information.

Further, now that there is an actual date for signing the root, we should start thinking about the future of the ITAR and investigate decommissioning and things like that. This is resources permitting; right now the priority is getting the root signed.

We are working on new procedures for DS record handling for the signed root zone. Because we have an existing root management procedure, we can't just directly transplant the ITAR model onto it. We'll be presenting the procedures for handling DS record submissions to the root early next year.

And that's it for this update. Are there any questions?

CHAIR: Are there any questions for Dave?

AUDIENCE SPEAKER: The technical tests you are doing on the ITAR, how far along are you? Are they finished yet?

DAVE KNIGHT: I think that's going to have to be a question that I'll have to defer to Kim.

AUDIENCE SPEAKER: Because there is one entry in the ITAR which doesn't make sense at all. So... that's something you, or Kim, might want to look at.

DAVE KNIGHT: I can talk about it afterwards and get that back to Kim.

AUDIENCE SPEAKER: A question from Jabber. Roy Arends from Nominet asks: would it be possible for a TLD to submit a DNSKEY to IANA with the caveat that it does not enter the ITAR?

DAVE KNIGHT: Again, I don't know; I am going to have to take that back to Kim. Sorry.

SHANE KERR: Shane Kerr, ISC. This may be another question for Kim, but you mentioned briefly decommissioning the ITAR. Is that a goal?

DAVE KNIGHT: There is currently no plan to do it. And I think we are eager to hear opinions from the community on how that should be handled, if it should be handled.

SHANE KERR: From my own personal point of view, I'd prefer for the ITAR to remain indefinitely. I think it's a useful crosscheck once the root is signed.

PETER KOCH: Peter Koch, following up on what Shane just mentioned: we had this DNSSEC key taskforce set up here, which came up with some ideas and recommendations, its findings, about how an ITAR could look, and an exit strategy was mentioned there. Seeing the announcements and following this, and actually liking the ITAR so far, the question of a tear-down or exit strategy came up. I think that's a key issue that needs to be addressed somehow, maybe open-ended, with different opinions, but that actually makes it more important to discuss very early in the process whether to have an exit strategy or not, given that by July 1 next year people should know where to go and not be stuck with the ITAR when they should use the root zone, or either way.

DAVE KNIGHT: Sorry, I am not sure if there was a question in there I was to respond to.

PETER KOCH: Please don't leave this dangling. The exit strategy, or the notion that there will not be one, needs to be communicated very carefully and early.

DAVE KNIGHT: Yes. Certainly there will be lots of notice when any actual decision is taken. As I said, there is a lot of eagerness for input from the community on this.

LARS-JOHAN LIMAN: Lars-Johan Liman. Can I just strongly recommend that, if you go for keeping the ITAR, please make sure that there is only one place where you make updates, and that you split the data on your side, so we don't have to make updates both to the ITAR and to the root in the future, because that's a recipe for mismatch.


CHAIR: Just before we close off this item: Peter mentioned the ITAR taskforce that we set up roughly a year ago. We set out a list of attributes and suggestions for what characteristics a trust anchor repository should have, and broadly speaking there is consensus inside the taskforce that the IANA ITAR meets those objectives and requirements, or fairly closely matches them. In fact, I think some of the documents and statements we produced were very instrumental in the work of scoping out the ITAR that they did.

Now, I think we are at the point where the taskforce could probably declare victory and wind itself down; we are almost there. We may be jumping the gun a little here, but it may be appropriate not to wind up the taskforce just yet. It may be a good idea to have the Trust Anchor repository taskforce continue, albeit in a zombie state where it doesn't do anything, just keeping track of what's happening with root signing and what's going to happen with the ITAR once the root is fully signed: whether there is going to be a transition and the ITAR either gets shut down or acts as some kind of mirror for the signed root. I think it's appropriate to leave the taskforce in abeyance with the idea of just keeping an eye on the developments that might pop up over the next year to 18 months or so. Can I get a sense from the room whether that seems like a reasonable thing to do or not? I see a few heads nodding. I'll go with the usual rule: silence implies consent.

Okay. Thank you very much and thank you Dave.

CHAIR: The next speaker is Stephane Bortzmeyer from AFNIC, who is going to give an update on his tool, which he has called DNSwitness.

STEPHANE BORTZMEYER: So, for those who were at the meeting on Sunday, this is the same talk, so you can go to sleep right now. For those who were in Dubai at the DNS Working Group one year ago, the main difference in this talk is that I will mostly speak about the new passive monitor. And don't worry about the number of slides relative to the length of the slot:

I will skip most of the slides.

Okay. So, DNSwitness is the general term we use for all our statistics and measurement tools. It has two important parts: an active monitor, which was presented in Dubai last year, and a passive one, which is the main subject of this talk.

So, the active tool sends DNS queries to see what's in a zone, for instance querying the AAAA for www.domain. The passive tool just runs tcpdump on the name server and puts the data in a database for analysis.

So, this is the sort of thing we can measure with the passive tool. A few words about practical problems. As you all know, if you try to store all the data from your name servers anywhere, it takes a lot of room. If you just run tcpdump -w on all the requests to your name servers, you will need a lot of disk, and to process it you will need a lot of memory. So the most common solution is to use sampling. The general idea of DNSwitness is not to send the pcap files to a big factory somewhere with a lot of disk and a lot of power to process them. The idea is to move the program to the user, not the data to an analysis centre, so the user can perform his own analysis with his own data, without having to send the data outside. That means that, unless the person doing the analysis has a lot of machines, memory, etc., you will need sampling to process this data; unless you are very rich, and in that case you don't need to be here.

A good thing about the development of DNSwitness is that at the same time as we developed the program, the IETF PSAMP Working Group developed an RFC describing a framework for packet sampling, with general techniques such as count-based sampling, random sampling, and so on. I strongly recommend reading at least RFC 5474; it's very interesting.
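The two families of techniques mentioned, in RFC 5474's terminology, can be sketched in a few lines (illustrative only, not DNSwitness code):

```python
import random

def sample_count_based(packets, every_n=100):
    """Count-based (systematic) sampling: keep every Nth packet."""
    return [p for i, p in enumerate(packets) if i % every_n == 0]

def sample_random(packets, rate=0.01, rng=None):
    """Random sampling: keep each packet independently with probability
    `rate` (e.g. 0.01 for the 1% sampling mentioned later in the talk)."""
    rng = rng or random.Random()
    return [p for p in packets if rng.random() < rate]
```

Random sampling avoids the aliasing that periodic traffic can cause with count-based sampling, at the cost of a variable sample size.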

Of course, nothing is perfect in this world: if you use sampling, you will have sampling errors. If you remember university, when the teacher talked about statistics, it's time to read your old university books again.

Also, sampling has a serious limit when it comes to security, if you want to track a phenomenon which is uncommon. For instance, with the DNS attack on BIND with dynamic update in July of this year, it was possible to tear down the name server with just one packet. If you do sampling, you may miss this packet. So if you want to do security studies, sampling is probably not appropriate.

Okay. Now, what we developed: DNSmezzo. That's the name of the passive component of DNSwitness, and it has three parts. One does the capture; we use a pcap program from ISC. Then a dissector takes the pcap files and puts them into the database. And then we use a set of reporting programs using SQL to query the database.

I'll skip most of the slides about implementation for lack of time. But just remember, if you do it yourself, or if you assign a student to write a dissector for pcap files: it is very dangerous, because in the real world many, many packets in the files are badly formed. You find a lot of funny things; for instance, in DNS you can find a name compression pointer that points outside of the packet. If you blindly follow it, you will have a buffer overflow or something like that. So be careful.

More interesting: we put everything in a DBMS, the main reason being that everybody knows SQL, everyone can write an SQL query, and it makes analysis easier: you don't have to write a program for each analysis.

Of course, if you are ambitious, you can do it not with SQL but with a map-reduce algorithm, with the Hadoop software, things like that. But for us today, we use only boring SQL. Here is an example of an SQL request. For instance, if you want to find the non-existing domains that are requested, which is fun information, you simply select the domain field from the packets where the packet is not a query and the response code is 3, meaning NXDOMAIN.
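The NXDOMAIN query he describes, run here against a toy SQLite stand-in (the table and column names are assumptions for illustration, not DNSmezzo's real schema):

```python
import sqlite3

# Toy stand-in for the DNSmezzo packet table; the real schema differs.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE packets
                (domain TEXT, is_query INTEGER, rcode INTEGER)""")
conn.executemany("INSERT INTO packets VALUES (?, ?, ?)", [
    ("exists.fr",  0, 0),   # response, NOERROR
    ("no-such.fr", 0, 3),   # response, NXDOMAIN
    ("no-such.fr", 0, 3),
    ("no-such.fr", 1, 0),   # the query itself: rcode is meaningless here
    ("gone.fr",    0, 3),
])

# The query from the talk: requested domains that do not exist, i.e.
# responses (not queries) whose response code is 3 (NXDOMAIN).
rows = conn.execute("""SELECT domain, COUNT(*) AS hits
                       FROM packets
                       WHERE is_query = 0 AND rcode = 3
                       GROUP BY domain
                       ORDER BY hits DESC""").fetchall()
```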

Let's skip the performance issues.

And let's skip to the results.

One of the goals of the DNSwitness system was to make long-term surveys possible: the ability to study things over not only several months but several years. That's one more reason to use sampling, because we cannot keep the whole flow of data for many years. But for this talk, I will not be able to show you long-term data, because it's too recent: DNSmezzo has been running at AFNIC since April, or May, something like that.

So, at the current time, we sample 1% at random and we collect during 24 hours. Unfortunately only one name server is instrumented at this time; it's on the to-do list to add more. And we capture with pcapdump. Now, the results:

Everyone starts with information about IPv6, so I will do like everyone else. What is the percentage of IPv6 requests on the .FR name servers? At the present time: 0.6%. Remember, an authoritative name server, which is what we manage, only sees recursors, typically BIND name servers at ISPs, local networks, etc., so 0.6% tells us nothing about the desktop machines behind them. It's only the percentage of resolvers in France that have IPv6. It will be interesting in the future to see if it increases. I'll come back later with more data; the next RIPE meeting will be a good opportunity to ask for a new trip.

Interestingly, many of the other statistics do not seem to depend on IPv6 versus IPv4. If you remember the talk by Google at the RIPE meeting in Dubai, they observed more IPv6 use during the weekend, probably because users have more IPv6 connectivity at home than at work. We have not yet seen any correlation between the percentage of IPv6 and anything. For instance, clients which are not patched against the Kaminsky vulnerability, which do not have SPR, source port randomisation, show the same percentage of v4 versus v6. My hypothesis was that IPv6 clients were managed by people more clever, more serious, etc. It does not seem to be the case.

Of course, when the percentage is as low as 0.6, you don't need to be a god of statistics to infer that you have to be very careful when interpreting anything.

Another funny measurement is the size of the responses we send. Here you can see on the X axis the date, starting in May of this year, and on the Y axis the percentage of requests per size. The big green band is between 128 bytes and 255: the most typical size. As you can see from the numbers, we don't sign .FR with DNSSEC; once we do, the figures will change a lot. The point is to be able to measure it over the long term. You can get a very similar picture with DSC; DSC is a very good program that gives you a snapshot of the activity on your name servers, among all the statistics that DSC can do. The interesting thing about DNSmezzo is that, since you store everything in the database, you can query the past, even far into the past, as far back as we have enough hard disk to store it.

Another typical survey that is interesting to do with DNSmezzo is the most popular domains. It was an actual question from the management: what are the most important .FR domains? "Important" is not easy to define, of course. It seems a very simple question. By the way, you can also do this with DSC, but it is an option that you have to activate and which slows DSC down a lot, so we don't use it. When you manage an authoritative name server, you don't see the actual requests by the users, but the requests sent by the DNS resolvers at the ISPs, which may change the picture a lot.

What we saw is that the most popular domains in .FR were infrastructure domains: domains used for hosting name servers, the names you find on the right-hand side of NS records. So, for instance, no one knows magic.fr, but there are many, many domains in .FR with ns1.magic.fr etc. So it's one of the most popular domains, far ahead of content domains like google.fr, microsoft.fr, etc. Of course, the most popular domain is nic.fr, because it's the one we use to name all our name servers.

So the management asked a very simple question, what are the most popular domains, and got a reply of 20 pages detailing all the possible answers.

Another interesting study is to see whether we are better now with respect to the Kaminsky vulnerability. Are all the name servers patched at last? No. 18% of the big clients are not. I say big clients because we don't take into account small resolvers, resolvers whose IP addresses send only one, two or three requests. To be among these big clients, you need to send 1,000 requests in 24 hours, which is not so bad. And still 18% of these clients do not have SPR. And they are not only small and unmanaged resolvers, because together they make up 15% of the requests. So we still have a lot of work to do here.

DANIEL KARRENBERG: Just an immediate question: if you just filter on the pure number of requests, you might get skewed by misbehaving resolvers, right, that keep asking the same question. Did you filter that out?

STEPHANE BORTZMEYER: No, we don't filter these. What we filter out are requests which are not by resolvers, which are typically your students running dig in a tight loop, this sort of thing, but some of the other possible treatments are not done yet. Also, of course, in the short time of this presentation, I cannot describe all the methodology, which is often a problem, yes.

DANIEL KARRENBERG: I understand that, just a suggestion. If you just look at the sheer number, you might fail to filter out some stuff you would want to filter out.

STEPHANE BORTZMEYER: For some of the worst offenders, I checked to see whether it was typical resolver behaviour. In one case I tried to find the IP address in the RIPE database and then I sent e-mail, made phone calls, etc. Most of the time, of course, I got no reply, but that's another matter.

Okay, the last one... not the last, but almost the last. This one is about the query types: A, NS, MX. Basically what you can see from this is that all the nice resource record types that are invented by the IETF, etc., are not used a lot in the real world. Most of the requests are still for the good old A and MX. The reason the curve is interesting is that MX is very influenced by spam campaigns. At times there is a big spam campaign, we see a lot more MX requests, and then it comes back to normal.
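The MX spike behaviour is simple to flag once you have a per-day MX share; a sketch, with an illustrative threshold rather than anything AFNIC actually uses:

```python
import statistics

def mx_spike_days(daily_mx_share, factor=1.5):
    """Indices of days whose MX share of queries exceeds the series
    median by `factor`, a crude proxy for spotting spam campaigns."""
    baseline = statistics.median(daily_mx_share)
    return [day for day, share in enumerate(daily_mx_share)
            if share > factor * baseline]

# Day 4 is a hypothetical spam campaign; the rest is the normal baseline.
spikes = mx_spike_days([10, 11, 9, 10, 25, 10])
```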

I don't have the time for a detailed comparison with other systems, but don't worry, we studied the other possible systems. Of course the one which is closest to DNSmezzo is the one by our Swedish colleagues, DNS2db, so if you plan to install such a system on your network (it's not only for TLDs; it can be used by anyone who manages DNS servers), you have to consider this too. They are nice competitors.

One last thing: everything is distributed under a free software licence at www.dns... I'll skip over future work, but in Dubai last year I promised to come back with longer-term trends, about active monitoring this time. And here are the results for IPv6-enabled domains in .fr. What we do is we take every domain in .fr and we ask them about everything, like www.domain: do you have a AAAA? And we check the MX: does one of the MXes have a AAAA? So the blue, the pale blue, curve is the number of domains which have at least one name server with a AAAA. You can see a survey going from December 2008 to now. This seems to be very good for IPv6, because the number of domains with at least one IPv6 name server goes from less than 2% to more than 7%. Unfortunately, we see it only for name servers. For web servers, or for MX, we don't see such an increase. And in the case of MX, there was even a decrease in the number of IPv6-enabled domains, for reasons unknown yet, but I am sure we will have questions about it, so I'll stop the talk now.
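The active survey's classification step can be sketched like this; the boolean inputs stand in for the actual AAAA look-ups the crawler would issue for each domain:

```python
def ipv6_status(ns_has_aaaa, www_has_aaaa, mx_has_aaaa):
    """Classify one domain the way the survey does: does at least one
    name server, the www host, or at least one MX have a AAAA record?"""
    return {
        "v6_nameserver": ns_has_aaaa,
        "v6_web": www_has_aaaa,
        "v6_mail": mx_has_aaaa,
        "any_v6": ns_has_aaaa or www_has_aaaa or mx_has_aaaa,
    }

def v6_nameserver_share(domains):
    """Percentage of surveyed domains with an IPv6-enabled name server,
    i.e. the pale blue curve in the talk."""
    hits = sum(1 for d in domains if d["v6_nameserver"])
    return 100.0 * hits / len(domains)

# Four hypothetical domains; two have a v6 name server.
sample = [ipv6_status(True, False, False),
          ipv6_status(False, True, True),
          ipv6_status(False, False, False),
          ipv6_status(True, True, True)]
share = v6_nameserver_share(sample)
```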

Questions welcome.

CHAIR: Thank you.

Any questions? Okay. I guess we're done. Thank you very much.

Next up we have Edward Lewis.

EDWARD LEWIS: I gave the title of this slide originally to Jim saying: what would operators do about provisioning DNSSEC? I forgot I had that really good title, so I had written the slides with the topic: DNSSEC operations, SEP provisioning. Jim also said I had a small time slot. I wanted to get these slides out with background material, so that people not that familiar with the situation can read back the definitions of what ITAR is, and SEP, and those terms that we see. I just want to go through the points written down the bottom of the slides.

We need a standard in this area. We had a lot of failures in the last year in terms of DLV and the ITARs, and people are saying things are out of date. Things are not working because we don't have a standard way of sending SEP keys, KSKs, around; we are losing track of what's going around. Part of it is that the producers of this data don't know where the receivers are, the receivers aren't keeping up to date, and there is no timing. We don't have any one way of doing this. I really think we need a standard if we want this to grow. Without standards we only have those who know each other playing along. We want to get this beyond that stage.

One problem I have: I work in a company that both runs a registry, where we will take in these keys from our customers eventually, but we also run a DNS hosting service where we have to give the keys out to other registries, and we don't even have a good way to talk within the company. We can do it. But we want to do it the same way as for everybody else. We don't want to have our own little internal pipe.

I am going to go into the situation where I am just trying to change the SEP, and this is an approach. The scenario that I started thinking about is: I have a running zone and I want to go to the next KSK or SEP key, so I need to swap this.

And the problem I have is the swap part. When I want to tell the parent of my zone that I have a new SEP key, I want to be able to say: here is the key; and first of all get feedback saying that they heard me. I would also like to know that they actually enact the change. I need to know that the change has happened at the top, in their area. And what I need to address here is the security of this: I don't want the parent to just accept any keys from anybody. They have to get it from the right place and at the right time, basically when you know it's coming. But there is also service level: if I am in an emergency, I would need to have this done quickly. What's my envelope? What's the amount of time I must allocate for the response when I am doing my rollover plans?

RFC 5011 is trust anchor management, which sounds like it should just do what I want. The problem is it doesn't have a confirmation step. It has a way for the producer of the key to say: here is a key coming, take a look; if no one complains in a month, you can count on it. When you revoke a key, you want to get rid of it, but there is never any confirmation that those listening to you are going to do the revocation and move on, because it's an open-ended situation they are trying to solve.

Now, to give a graphical look at what I see as the problem: when I start out with a KSK or an SEP, a brand new one, I am going to publish it in my system first, from my provisioning down to where I do look-ups. I then want to explicitly tell the parent I am now using this key. I want them to know this, not by implication: I want you to know I am going to this next secure entry point. The way of telling the parent could also be through a look-up in the DNS, but I don't want to get into a solution here; I want an explicit notification to go to the parent saying: this is the change. The next thing is that the parent, or the TAR, whatever I am talking to, has to process that new record, whether they sign it, or put it in the zone, or distribute it through whatever mechanism they use. The next step is that it's confirmed: I have seen that all those I have told about the key have changed the key, so that on my end I can go to the next step, which is to get rid of the old key.

That's basically what I envision as the flow of the data.
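The flow can be written down as a small state machine; the state names are invented here for illustration, and a real protocol would need timeouts and failure paths at every step:

```python
from enum import Enum, auto

class RollState(Enum):
    PUBLISHED_LOCALLY = auto()  # new SEP key live in my own provisioning
    PARENT_NOTIFIED = auto()    # explicit notification sent to parent/TAR
    PARENT_PROCESSED = auto()   # parent has signed/published the record
    CONFIRMED = auto()          # everyone told about the key has changed it
    OLD_KEY_REMOVED = auto()    # safe to retire the old key

# Each arrow must complete, and crucially be confirmed, before the next.
TRANSITIONS = {
    RollState.PUBLISHED_LOCALLY: RollState.PARENT_NOTIFIED,
    RollState.PARENT_NOTIFIED: RollState.PARENT_PROCESSED,
    RollState.PARENT_PROCESSED: RollState.CONFIRMED,
    RollState.CONFIRMED: RollState.OLD_KEY_REMOVED,
}

def advance(state):
    """Move the rollover one confirmed step forward."""
    if state not in TRANSITIONS:
        raise ValueError("rollover already complete")
    return TRANSITIONS[state]

# Walk a complete rollover from first publication to old-key removal.
state = RollState.PUBLISHED_LOCALLY
while state in TRANSITIONS:
    state = advance(state)
```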

So that is what I see as the data flow problem. The situation we run into in trying to solve this is the usual debate: there are so many ways people have interfaces between those who run websites, those who run DNS for them, those who register names in the registries, and how registries are put together. We have the ICANN model, we have everybody else's models, and I tried to break these boxes out here, and what I see is this generalised model. This is not necessarily going to identify the flows, but we have to figure out how this works. I have slides that I am not showing which play around with what I think is actually in play in some places. The blue arrows are kind of somewhat solved; the right-hand blue arrow in some places is the EPP protocol. Again, that's not everyone's flavour. The green dashed arrows are the things I think we need to solve: how do we get the data that comes from operating the DNS into the parent or the TARs? And I don't mean necessarily directly, but we have got to get the data through all the pipes to get over there.

And I want to remind people, this is about provisioning data. This is not look-ups. I want to get into the provisioning side of the house, whether it's those who check things as trust anchors or the registries going through their databases and so on. We have to get these keys through the provisioning side. These are the requirements; this is what I really want to come up with at some point. I am not saying these are all the requirements, but this is what I think we have to do. We have to do functionality: make sure we get this data back and forth, whether it's a swap, add, modify, delete, whatever. Security has got to be there: we have to make sure that only the authorised folks are getting authenticated changes made.

Accountability: if changes are being made and things have fallen on the floor, how do we go back and clean things up?

Performance: for me this is important because, when I am designing my plans, I need to know how long I have to wait before the parent and everybody else will say they have made the swap of the key.

Finally, predictability. Predictability in operation is a really good thing: everybody knows it's coming and it comes on time.

When things aren't predictable we start getting suspicious that there may be some malicious activity out there or just a plain failure.

Now, the environments: this is where we get hung up in mailing list discussions. There are a lot of ways to set this up: registrant to registry and so on. I want to say that we have all these environments out there. The slides in the slide pack have different drawings, which were my attempt at expanding what the problem space is. It's not just domain names; it's also the reverse maps: the LIRs and the reverse map information going back to RIPE and so on. We have the same problem up and down the tree, no matter how we have divided up the business models of how we put things together.

Ultimately, what I am looking for now is to come up with requirements, because we need a standard to do this stuff. We need a good way to pass things back and forth that works across the board. When you are an operator of DNS and you are producing records, you have to talk to many registries out there. If I am a registry, I have to talk to many operators out there, and in different ways. We operate registries according to a gTLD model sometimes, a ccTLD model other times. We have lots of different modes of operation, and we want to solve this in a way that everyone can come to some level of agreement on where we put the interfaces between the DNS operator and the registry model and so on.

Also, what I didn't put in these slides, because initially I was going to include it: we also need to talk about the ISPs running recursive name servers. I am interested in talking to people who run ISP recursive servers about how they want to get involved with this, in addition to anyone who has an environment which may not be the ICANN shared registry model (that's the one I am familiar with), to find out where we solve this, and hopefully to get people to think about this problem. People have told me they hadn't thought about this part of the provisioning until these questions came up.

So with that, those are my slides.

CHAIR: Thank you. Any questions?

SHANE KERR: Shane Kerr. I don't disagree with anything you say; I think this is good and a good activity going forward. One thing that I do worry a little bit about is: do you think that any kind of standard process or technology can succeed without registrar buy-in?

EDWARD LEWIS: The reason why we have this problem in the ICANN world is the registrar interface; there is no standard there at all. So... in fact, one of the slides I have there says we may have to push this data to the registrars, and they put it into EPP and what they know. But the registrars, too: the thing is, for me there are registrars who are domain name operators and there are registrars who are domain sellers. I don't touch that part of the market; I am not that familiar with it. That's why I want to hear from people who are in there. What do they want to see? Do they want to run a protocol? How do they want to handle this?

SHANE KERR: Okay. I guess I think it would be worthwhile going through the activity, even if it fails, because it can succeed in some subset of cases, but I suspect that the registrars are going to say no, and basically make DNSSEC adoption very tricky.

EDWARD LEWIS: Maybe some will make it easier, some harder. We have talked to some registrars who are a lot more advanced in doing DNSSEC because they do a hosting thing and they have a different interpretation and they have an easier problem frankly, because they can go right to the registry.

AUDIENCE SPEAKER: Have you talked to... I think Steve Crocker is working on the problem of transfers.

EDWARD LEWIS: Actually one of the slides I missed was related problems. Listening to a presentation they gave made me say we need to understand what the problem is, so...

PATRIK FALTSTROM: Patrik Fältström, standing here as a registrar, in fact. One of the things that you should think about carefully regarding registrars is whether you envision the holder of the key of the child zone talking directly to the registry or to the registrars; that sort of business case implication is something which might draw the biggest reaction from the registrars.

The second thing is that there is in general a need for a protocol to communicate with the registrar, given that you are a DNS manager, because you might want to make other changes than changing the DS: maybe you also want to provision a zone, or change the NS, and whatnot. The question is whether you would like to go down the path of actually implementing the whole EPP baggage. Maybe you don't want to do that. But I do think that this discussion, in the DNSSEC context, might be able to move forward if you constrain it to a discussion on how a DNS operator that is not a registrar should pass the DS record to the registrar of the domain.

EDWARD LEWIS: That's one sub case.

AUDIENCE SPEAKER: That's a sub case but maybe it is the case that we ?? by also constraining the problem it might be easier to move forward also from a business case perspective to get a buy in on the whole idea.

EDWARD LEWIS: I think the reason why this question came up is that with an NS record, when you change it, you can add one and keep going. With the DS record it's different; that's why it's coming up now as opposed to before.

MATT LARSON: Matt Larson. I want to follow up mostly on something Shane said. This talk of special actions for transfers of DNSSEC-enabled domains: that's not a registry and registrar issue. That's an issue between two hosting providers. I know that sometimes the registrar is the hosting provider, but that does not mean that we need to encumber the registrar/registry relationship with it. A transfer is a transfer. And the DS record, the key material, goes from one to the other; we are done.

LARS-JOHAN LIMAN: Lars-Johan Liman. But that implies that we need to really clearly define the role of the DNS operator in all this series of relationships, and you might need to define protocols to deal with the DNS operator. It may not hit the registry, it may not hit the registrar, but there is probably room for improvement in that section.

EDWARD LEWIS: Certainly if I have a domain in dotcom, I am not going to send it to VeriSign; I am going to send it to the registrar. If I have a domain in a TLD that has a direct interface, I want to send it there. And I know I am not going to send it directly for my customers who are .com delegations.

CHAIR: I have got a final question just to close this off, which is really two questions: one for yourself and perhaps one for the Working Group. The question is: what do you think are the next steps here? I think you have identified something that needs to be tackled, and it looks as if it doesn't fit comfortably into something that could be done in the IETF because it's in between two issues. So it probably needs some kind of forum, and maybe this is something this Working Group might want to pick up on, but I'd like to know what you think the next steps should be, and maybe whether the Working Group thinks: yes, this is a work item we could pick up and do something with.

EDWARD LEWIS: I picked this venue to present this because the IETF is about protocols and solutions, not so much about requirements. This is also the kind of thing where, if I were to pick between the IETF and ICANN, I'd go to ICANN, but this isn't an ICANN-only problem. I want to reach out to people that have other operational models; the ICANN model is one part of this. We need to get more voices. And this may actually touch on the recursive server side, which is the ISPs that may be coming to RIPE and have another angle on DNSSEC completely: they need to pull this data out and provision it in their recursive servers. This is the venue where, I think, you have the broader spectrum; at least in the European area you have a better perspective on how you run this entire system, and we all have unique ways of doing this. We have .SE, which has the experience of running this for some time. That's why I think handling this issue in this group is a good place to go. I might go to some place in Asia, APTLD, and look at theirs, but that's only registries. I like the fact that we have operators in the room and registries. We have everything. This is a good venue for that. We have a good cross-discussion here.

So the next thing is does the Working Group ??

CHAIR: Is this something that the Working Group is minded that we could maybe pick up and do something with?

My feeling is that if we want to do something here, the way forward is maybe to have Ed produce a paper that we could then start chewing on and refining and working on, with a view to creating an action item. I think it's too early to be talking about action items myself, personally. This is something we might want to take up. Peter?

PETER KOCH: First, I'd say that even though I was out of the room most of the time, I have read the slides before, so I partly know what I am talking about, hopefully. I have a slight concern that we are discussing this topic in too many fora already. Having more discussion on the mailing list is fine, but personally (and probably with no hats on in this case) I don't see anything that this Working Group could do better than other fora. We have various IETF venues and, well, you are in bad shape when you prefer ICANN over the IETF. But we have the DNSSEC Deployment Working Group, we have other things. I don't really see what this Working Group could add to that from that perspective. What's needed here is operators and registrars, and if we do have them in the room, then yeah, please speak up, take up the work and share your concerns.

CHAIR: No other questions or comments on this? I think what we'll do is leave it at the moment. I think have some more discussion on the list and see if something materialises out of the list, that could be something either for the Working Group to do or to pass on to one of these other fora that Peter was mentioning.

EDWARD LEWIS: If anybody wants to send comments to me privately, that's fine. I am trying to collect any data you want to give me on this.

CHAIR: Okay. Thanks very much.

Okay. The next slot is set aside for follow-ups to EOF discussions, and I have tried to split this into two in the agenda for the Working Group. We would like to have discussion in this closing slot before the lunch break around the root signing initiative that was outlined in the plenary on Tuesday, and also any follow-up questions to the DNSSEC deployment plans that Sara discussed for .pt.

So, Matt, I know, has got a couple of slides ready for going into a bit more of the technical detail about the root signing plans, but before we get on to that: has anybody got any follow-up questions on the plans for signing .pt or any other issues surrounding that?

Sara, do you have anything to say at this point?

We have got a few minutes for a slide presentation of a sort, but this is really just to spark discussion, and hopefully we can keep the discussion going up until the lunch break.

JOE ABLEY: I am going to skip through these quickly. On Tuesday we presented an overview; the idea here was to have a bit more of a closed session where people could ask questions. So what we are trying to do is deploy a signed root with all these good attributes. These are the problems that we think we have, the issues that we are trying to accommodate when we deploy a signed root.

First of all, we don't know how many, but we think a reasonably large number, a number large enough not to be able to ignore, of clients set DO=1 even when they have no intention of validating. That's fine while they receive a small unsigned response. When they receive a large signed response, it's possible that the network between that DNS client and the server will not accommodate the response, and that could be because of UDP fragmentation issues or TCP transport availability; it could be because of firewalls or other middleware that are not expecting big responses.
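The DO bit and the advertised buffer size referred to here live in the fixed fields of the EDNS0 OPT pseudo-record; a sketch of decoding them from wire format (the RFC 6891 layout, not a full DNS parser):

```python
import struct

def parse_opt_fixed(opt_bytes):
    """Decode the fixed part of an EDNS0 OPT pseudo-record: root name
    (one zero byte), TYPE=41, CLASS carrying the requestor's UDP payload
    size, then extended RCODE, version, and a flags word whose top bit
    is DO. Returns (payload_size, do_bit)."""
    name, rtype, payload_size, ext_rcode, version, flags = struct.unpack(
        "!BHHBBH", opt_bytes[:9])
    if name != 0 or rtype != 41:
        raise ValueError("not an OPT record")
    return payload_size, bool(flags & 0x8000)

# A hypothetical client advertising a 4096-byte buffer with DO=1.
opt = struct.pack("!BHHBBH", 0, 41, 4096, 0, 0, 0x8000)
result = parse_opt_fixed(opt)
```

Counting DO=1 queries against validation traffic actually seen is one way to size the population of clients that ask for signed data without ever using it.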

If we suddenly start sending big responses to all those clients that currently, with no changes, are receiving smaller responses, it's possible those clients will suddenly stop being able to see name servers, and we think that's probably bad.

The other issue that we are trying to accommodate is that if there is some catastrophe midway through the roll-out of DNSSEC which requires us to roll back, and if that catastrophe happens far enough down the line that a validating community has appeared, early adopters that have decided to validate from the root down, then to them the rollback looks like a man-in-the-middle attack and the DNS goes dark. By unsigning to protect the people who are not ready for DNSSEC, we are hurting the people who are ready. That has bad connotations for future deployment.

So there are two aspects to our proposal here. First of all, we want to deploy incrementally. We are talking about deliberate incoherence in the structure of the zone, not the contents of the zone. We want to serve a signed zone from an increasing number of servers, but not all the servers, gradually increasing to be all the servers. So to start with, we serve a signed zone from just L root, which is the root server operated by ICANN, and follow up with J root. Other root servers would then pick up the signed zone, and we expect the last one we would do would be A. And the timeline for this would be months.

So the idea here is that there are name servers which have problems receiving a larger response, but which are able to shift and use a different name server if they can't reach the one they tried first. Those name servers are protected in the sense that they still have some name servers they can use to get a response that's small enough for them to receive. And we leave A till last because we think there is some population of clients in the world which do a linear search, or which always assume that A is reachable. So that amount of damage we'll leave till last.

The second important aspect is that when we sign the zone, we will do it in such a way that you cannot validate the records in it. The idea of doing this is to prevent a community of early-adopting validators from forming, simply because the data in the zone doesn't support validation. The key we put in the DNSKEY set is a key not used to sign anything; the key that is actually used to sign things will not be published anywhere. So there is no trust anchor available for this; there is no validation possible. Here is an example of the bogus key we might put in the DNSKEY set at the apex. It is actually a valid key, but it contains some text that guides people as to why they shouldn't use it.

So, again the goals here are to deploy conservatively and to prevent a community of validators from forming.

Doing all this stuff is very nice, but unless we measure it, there is no point in doing it. There are two measurements we want to do. One is actual examination of the packets on those root name servers that are well instrumented: trends in traffic changes, to see if we can find communities of clients shifting from one server to another in response to that server serving a signed zone. We expect this to be a difficult problem, because the root servers receive a lot of crap, so it's difficult to see what's good.

The other important part is to keep talking to operators and let them know these changes are happening, and to continue to try to receive feedback through mailing lists and meetings as to what impact these actions at the root are having in the real world.

So, before we start any of this, of course, we would make sure that the proposal we have doesn't fundamentally break any well-known resolver implementation, in terms of the way the zone is signed. And we are here basically to get feedback on this. This is a proposal at this stage. We are proposing that this would start as of December the 1st and continue onwards to full deployment next year. It's coming fairly soon. If people have opinions about whether this is a good idea or a bad idea, I'd like to hear them. And if we have additional time for questions, if people have other questions from Tuesday to do with the parameters used to sign the root or anything else, you can also ask them here or approach Matt or me directly.

CHAIR: Okay then. Time for questions and comments. Who would like to take to the mike first?

EDWARD LEWIS: How do you sequence the two in time? The incoherence would be gone before you give out the real keys, I would think? Do you think you might give out the real keys before you get to all the servers, or have you thought about the timing of the two?

JOE ABLEY: I think we envision this at the end of this whole process: once the signed zone has been rolled out everywhere, we would execute the first production KSK roll. We don't publish the trust anchor until all the audits and the security validation have happened. There is no key to hand out until that KSK roll has happened.

SHANE KERR: Shane Kerr. I have a question about the physical locations where L and J are served from. Are they both anycast around the world?

You are starting off with the L server, and you may have different impacts depending on where the servers are physically located. It looks from the slide that you are starting with L, then going to J, and then everyone else but A. I want to make sure that you are actually covering the same regions of the world when you go from that step after J to everyone but A.

SPEAKER: I don't see the connection with anycast. I mean, fundamentally it's an IP address of a root server that's providing service.

SHANE KERR: But clients tend to go to the closest root server, so I guess what I would propose is to try, as a step, to bring in the root servers that are widely distributed geographically as a kind of next level of things that could possibly break.

AUDIENCE SPEAKER: Olaf. This sort of hooks into what Shane mentioned: the order in which you deploy all this, because I think that order has to do with what kind of problems you are anticipating and what measurements you are using to try to observe whether those problems actually appear. And this, I think, is the heart of my comment and last question: there might be things that are very hard to observe, problems that are there but very hard to observe because traffic patterns shift, or the changes are in the noise of what is actually happening at the root servers, and so on and so forth. So I'd be interested in understanding what kind of things you will be looking for, what kind of methodology you will be using for observing those shifts, and whether there are any plans to share those methodologies and possible problems you are looking for with the broad community?

MATT LARSON: Okay, I'll start from the last part of your question first. The intent here is that all documentation, all designs about the system are going to be made public, and that will include the testing plan and the deployment plan. So, not only made public, but made public and we are soliciting feedback.

I agree that measuring is a real problem for all the reasons you have said. It's difficult because it's finding a needle in a hay stack. One of the things we are going to do in addition to looking at the obvious stuff like rock query load and the pattern of source IPs, the query given surfer. Someone suggested that we could look at the number of queries that come in with EDS series buffer side of 512. If that number goes up that could indicate people falling back because they were unable to get large responses. The other thing that we are relying on, as Joe said in the slide, is community feedback. What we really don't say in these slides that the communication plan that goes along with this is going to be widely published. We want everybody to know that this is going to be going on and we are going to have multiple feedback channels. And the intent is to have there be sometime while ?? interval between when the last server is serving the invalidatable zone until we publish the real Trust Anchor. So there will be a window when all the servers are in this mode. So if there is somebody who has trouble, there is no place they can go. That's sort of the time of last resort to accept feedback and find out from people if there are problems.

AUDIENCE SPEAKER: Daniel Karrenberg. I think this is a neat idea, just to start on a positive note, because nobody has said that yet.


I also think, and it may sound somewhat redundant given what you just said, but I stood up earlier: the crux of this whole thing is the communications. I would be very worried if there wasn't a strong, unified communication plan between all the actors in this area that projects exactly what's going to happen, projects what the motivation is, and makes clear that this is not some arbitrary playing with the system. It should project very clearly that we are all acting in a coordinated way, that we are all behind this, and that it's all well organised. And I think that is about 95% of the effort. I see both of you standing there with ICANN and VeriSign hats on, and I appreciate you taking the lead on this, but I think you have to have everyone in this space be part of that communications plan and have a unified message going out, so that this cannot be spun as something that is bad or destabilises the system and so on. So the ducks you have to line up are not just the technical execution plan and the measurement plan, which is very important because you have to take measurements from various places (we have structures for that); the important thing is to have the communications plan lined up. Those ducks are the most important ducks in this one.

AUDIENCE SPEAKER: Andre. I guess my question points toward what Olaf asked, in particular understanding what kind of things can break in this communication between root servers and the client, from the client's perspective. I appreciate that we as the operators will look at what happens with the traffic patterns and how this whole thing behaves, but it's very difficult to detect problems on the client side. You may experience delays or something like that, but I doubt that people will go deep and analyse.

So my question here is: how can we help the clients, so they can test whether they are having problems, with something more explicit than just, you know, shifting to other letters that are not signed yet?

JOE ABLEY: I think that's a good question. I don't know that we have a particularly clever answer for it right now. The clients we are talking about are the validators that are run by the ISPs. So far the audience for this seems to be the ISP community: the people who are running the help desks, the people who have things to check when clients complain that resolvers don't work. There are big operators that we can probably pay special attention to, OpenDNS and such people, and we can try to talk to them and find out what their experience is, as a key example of a well-used resolver. As far as giving clients better tools goes, communication is the most immediate path. If people have ideas about specific kinds of tests: I know people are working on client-oriented tests for path MTU and such things. I think that's great, and continuing to hear feedback from those people is important. But again, that seems more like a communications channel issue for us at the moment.

AUDIENCE SPEAKER: I guess my suggestion was: along with the communication campaign, give people real tools so they can check whether it's their firewall, or fragmentation on the path, or something else related to the actual data that we will be serving.
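A self-check along these lines could run three probes, for example with `dig +dnssec +bufsize=4096 . DNSKEY @<server>`, the same query with `+bufsize=512`, and once more with `+tcp`, and then classify the outcome. The function below is a hypothetical sketch of that decision logic only, not an existing tool; the probe names and classification strings are the editor's assumptions:

```python
def diagnose(udp_large_ok: bool, udp_small_ok: bool, tcp_ok: bool) -> str:
    """Classify a resolver's likely large-response problem from three probes:
    udp_large_ok: a +dnssec query with a 4096-byte EDNS buffer returned a full answer
    udp_small_ok: the same query with a 512-byte buffer returned a (truncated) answer
    tcp_ok:       the query succeeds over TCP port 53
    """
    if udp_large_ok:
        return "ok: large UDP responses arrive intact"
    if udp_small_ok and tcp_ok:
        return "fragments likely dropped: large UDP fails but TCP fallback works"
    if udp_small_ok:
        return "fragments dropped AND TCP/53 blocked: will break with a signed root"
    return "EDNS queries blocked entirely: check firewall or middlebox"

print(diagnose(False, True, True))
```

The point of separating the probes is exactly Andrei's: it tells the operator whether to look at fragment handling, at TCP filtering, or at EDNS support in a middlebox.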

JOE ABLEY: It's a good idea.

CHAIR: We are coming up to the close of our allotted slot, and I think we should continue the discussion as much as we possibly can, but let's not eat too much into the time, so could I ask the remaining speakers to please be as brief and as direct with their comments and questions as possible, so we can fit everybody in. Neill.

AUDIENCE SPEAKER: Neill O'Reilly, UCD. Part of what I wanted to ask was already asked by Andre. But I think there are still a few gaps here.

On the one hand, it's not just about tools. The outward communications have, I think, been very clearly identified as needing to be done, but the problem is the feedback channel. How is, for example, the help desk in my university, or the ISP that delivers service to the home, going to be not just able to communicate back any problems, but seriously brought into the channel? It's all very well to hear that people with problems should let us know, but there has to be a much more positive outreach than that, and it's a huge challenge, and I am not sure if there are any good answers for that. But if you have some, it would be good to hear them.

JOE ABLEY: When we have some, I'll let you know.

MATT LARSON: We do have some ideas.

AUDIENCE SPEAKER: Sara from FCCN. I just have two questions.

One, I didn't notice whether you have already said: will you use NSEC or NSEC3?

The other is, as .pt: I think someone already asked at the plenary, but I really would like to know, if I want to submit my DS, do you have any kind of validation or digital certification, something that will guarantee that it is really .pt in that DS record?

JOE ABLEY: So, for the first question, it would be NSEC, not NSEC3. The root zone is a public zone; we don't have any need to enforce privacy in the root zone, so it would be NSEC.

As to the second question, that's really an IANA question. I don't work with the IANA group at ICANN, so I can't give you a very good answer at this stage. But the existing channels that are used to authenticate requests to change entries in the root zone, for instance if you change name servers or renumber them, those procedures will be extended to accept DS records to be used as trust anchors in the root zone. The precise mechanisms haven't been fully defined yet; that's an ongoing work item for the IANA. Once those requests have been received, they get authorised as part of the normal chain of deployment of data for the root zone, from IANA to NTIA to VeriSign.
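For reference, the DS record a TLD would submit is mechanical to derive from its DNSKEY: RFC 4034 defines the digest as a hash over the owner name in canonical wire form followed by the DNSKEY RDATA, and Appendix B gives the key-tag checksum. A sketch, assuming digest type 2 (SHA-256); the key bytes in any real submission come from the TLD's actual DNSKEY, not the placeholder used here:

```python
import hashlib
import struct

def name_to_wire(name: str) -> bytes:
    """Canonical wire form of a domain name: lower-cased, length-prefixed labels."""
    wire = b""
    for label in name.rstrip(".").lower().split("."):
        if label:
            wire += bytes([len(label)]) + label.encode()
    return wire + b"\x00"

def key_tag(rdata: bytes) -> int:
    """RFC 4034 Appendix B key tag: a 16-bit checksum over the DNSKEY RDATA."""
    total = 0
    for i, b in enumerate(rdata):
        total += b << 8 if i % 2 == 0 else b
    return (total + (total >> 16)) & 0xFFFF

def ds_digest(owner: str, flags: int, algorithm: int, pubkey: bytes) -> str:
    """SHA-256 DS digest over owner-name wire form plus DNSKEY RDATA."""
    rdata = struct.pack("!HBB", flags, 3, algorithm) + pubkey  # protocol is always 3
    return hashlib.sha256(name_to_wire(owner) + rdata).hexdigest().upper()
```

The registry-side validation Sara asks about is a separate question: the digest only proves which key is meant, not who submitted it.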

AUDIENCE SPEAKER: It's being developed as part of the ITAR work, so if she is interested in that, that's where she should go.

JOE ABLEY: As far as the overlap between the root zone and the ITAR goes, there is no desire at this stage, at least nobody has expressed a desire to the IANA, to have any records automatically moved from the ITAR to the root zone. It would require an explicit request; nothing automatic will happen. It will be driven by the TLDs.

AUDIENCE SPEAKER: I have two comments.

Comment number one: you said that receiving larger responses may cause problems for certain resolvers, because for one reason or another they will not be able to receive them, be it broken middleware or whatever. I think it's useful to clarify that we actually have a fair amount of DNSSEC already in the system. So those people with that general problem will probably have certain issues reaching a whole bunch of TLDs, including .org, right now. So I don't think that's really a major issue. It's a concern, but not a major issue, because it has mostly been, if not flushed out, then at least already noticed.

The really crucial difference here is the priming query. The priming query is the only thing that is different when signing the root compared with signing all sorts of other zones. So that's worth keeping in mind somewhere. That's the real difference.

Then on the topic of stats and measurements and seeing if there are troubles: well, I absolutely have concerns here. I see that noise daily. I have spent weeks and months trying to understand the noise and have mostly given up, and it's really, really hard to analyse root server traffic even if you have many of them. It's not easier having all of them, because then we get into data collation issues that are hard by themselves. So I believe there will be certain problems with actually seeing stuff in this. And then we have the additional problem that when you ask for input from the community when there are problems, the community, as in us, are not the ones who will have problems. The people who should tell us something are the guys we have never heard of. The problem with the guys we have never heard of is that if they have a problem with this, for whatever reason, in spite of all the announcements, and DNS is not working as it should, the most typical response at the edge of the network is to restart the name server. And this unfortunately further messes up the stats, because the assumption here is that a resolver that has a problem trying to prime against a signed letter would go to the next one, which may also be signed, then to the next one, and eventually find an unsigned one and be fine. The problem is that when you restart the name server or reboot the machine, you lose all context and you lose all history, so it will just start out from the same vantage point again, doing the same broken thing over again. And this makes the stats even harder to interpret.

AUDIENCE SPEAKER: May I respond to this? Olaf: What you just explained was at the basis of the question that I asked earlier. I believe that if you try to assess what kinds of things can go wrong, and what you just mentioned, people restarting their name servers, is one thing that can go wrong, then you could come up with a specific footprint of that behaviour. Data mining for that footprint might then be much easier to do, with sampling or whatever. My suggestion would be: try to come up with scenarios of things that can go wrong, the thing you just mentioned being only one scenario, and try to see what kind of footprint each would leave in the data collected at a set of name servers, and see if you can tailor your measurement and instrumentation towards that. That is sort of the background of the question that I had: is there an inventory of things to look for, essentially, in that data set?
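One such footprint is concrete enough to sketch: a resolver stuck in a restart loop shows up as repeated priming queries (`. NS`) from the same source in a short interval. Assuming query logs are available as (timestamp, source, qname, qtype) tuples; the window and threshold values are hypothetical, for illustration only:

```python
from collections import defaultdict

def repeat_primers(events, window=3600.0, threshold=3):
    """Given (timestamp, source_ip, qname, qtype) tuples, return the sources
    that issued more than `threshold` priming queries (. NS) within `window`
    seconds: a possible fingerprint of resolvers being restarted repeatedly."""
    times = defaultdict(list)
    for ts, src, qname, qtype in events:
        if qname == "." and qtype == "NS":
            times[src].append(ts)
    flagged = []
    for src, ts_list in times.items():
        ts_list.sort()
        for i in range(len(ts_list)):
            # count priming queries in the window starting at ts_list[i]
            n = sum(1 for t in ts_list[i:] if t - ts_list[i] <= window)
            if n > threshold:
                flagged.append(src)
                break
    return flagged

# A source priming every minute stands out; one normal primer does not.
events = [(t, "192.0.2.1", ".", "NS") for t in range(0, 300, 60)]
events += [(0.0, "192.0.2.2", ".", "NS"), (5.0, "192.0.2.2", "example.com", "A")]
print(repeat_primers(events))   # → ['192.0.2.1']
```

Running a handful of such scenario-specific filters over sampled traffic is one way to build the inventory Olaf asks for, rather than staring at aggregate query load.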

CHAIR: Could I just jump in here for a second, Olaf?

That actually could also be useful for the communication outreach activity: these are the sorts of things we expect could go wrong, so if you as an operator find yourself restarting your name servers much more often than you would expect to, here is what it might be...

AUDIENCE SPEAKER: I basically agree with what Olaf said. The point is not that this is all wrong and we shouldn't do it, absolutely not; of course we should do it. I am just concerned that the whole point of the staged phase-in is that we will be able to draw conclusions from stats, and if we are unable to draw conclusions from stats, that added complexity is sort of pointless. So we really, really need to sit down and figure out, as Olaf said, the fingerprints of what will be useful information, in advance, because we must have that beforehand.

JOE ABLEY: I think there is also the point that we expect things to come up that we didn't expect. We want to give ourselves time to examine those footprints, so we need the deployment schedule not to be too short.

AUDIENCE SPEAKER: Neill O'Reilly. I think there is an important five-letter word that I haven't heard in the room yet, and, before lunch, what I mean is the five-letter word "press". We have already seen articles about IPv6 in the business and technology pages of our better newspapers in our small country. This thing has the same kind of potential for disastrous impact. The message needs to be got out there beyond the techie community in a big way. That's it.

CHAIR: A last comment, and then I need to make one or two final comments myself.

AUDIENCE SPEAKER: I have another five-letter word: thanks. We all want to see this happen.

CHAIR: Thank you, Joe.

That brings me back to the thing I would like to close this discussion with. We have given Joe and Matt some input and feedback, which will help them with their plans for the future roll-out of the signed root.

You may remember back at the Tallinn meeting that we set the ball rolling here by issuing a statement saying: please sign the root and get this done. So I think it would be rude of us not to make some kind of response from this Working Group, or from the RIPE community, to say thank you and to say that we support those efforts. I assume you all do support those efforts. So can I have a show of hands from people in the room that this is a good idea: a communication coming from this Working Group to say thank you for doing this and we look forward to seeing a signed root next year.


CHAIR: Thank you. What I'll try to do is put words together and present them to the Working Group in the afternoon session, just to see if we can get some kind of acceptance that that's the right way forward. I'd like to thank Joe and Matt for coming here and giving their presentations, and the other speakers for this morning. Thank you to the scribes, the people doing the transcription and the Jabber relay. Thank you all very much and I'll see you back here at 2.

(Lunch break)