
These are unedited transcripts and may contain errors.



The session on test traffic commenced on 7 October 2009 at 2 p.m.

CHAIR: Good afternoon everybody. Welcome to the test traffic Working Group. First of all, we have got the sort of agenda bashing to go through. So the minutes were posted to the list back in June, July time. Anybody got any issues with the minutes from the last meeting? Okay.

I'd like to thank Rena for being the scribe and the Jabber monitor. The agenda that we have got is: Eric is going to give a very short update on the Test Traffic Measurement network. Then we are going to have a two-handed explanation of Netsense from Franz, which is an expansion of what we talked about in the Plenary Session. Then we have a talk on throughput metrics in networks. Is he not here? I think this is a second attempt to give the talk after the remote presentation failed in Amsterdam.

Then we have, I believe, Emile? You are going to give a presentation on behalf of Rob Smets. And Rob is available on Jabber, I believe, if there are any questions. That's going to be followed by a discussion of the Working Group charter. It says Mark Drans on the charter, but it's going to be given by me with input from the floor. So, I think, Eric?

CHAIR: One thing I forgot to mention is: if you want to add comments or ask questions, please make sure you announce your affiliation into the microphones. Thank you.

ERIC: Okay. Good afternoon everyone. My name is Eric. I work in Engineering Services as a software engineer. I am giving this update using the slides by Ruben, who did this update before.

The current status of the grid is that we have 110 enlisted boxes, in the sense that they are assigned a number. 76% of those are currently taking measurements; that used to be 74%, so we are slowly improving. We have done a number of installs since last time, and those are mostly outside of Europe, where we have most of our boxes. We have managed to install boxes in Kenya, South Africa, Uruguay, Pakistan and Hong Kong, the last two with help from APNIC, who are running a deployment project for us. And recently a box came online in Teheran, in Iran.

We have also replaced two boxes with new antennas, and they are taking measurements again. We replaced a box in Aida because our backup network is located there.

No boxes have been decommissioned, so we are seeing growth. We are running a number of projects nowadays with support from local parties to deploy groups of TTM boxes. In Brazil most of these have been deployed by now; there is still one to go. APNIC has been deploying various boxes, of which you saw the Hong Kong box on the previous slide. We are currently working on a new deployment set in Russia, and also there APNIC is providing us support in new deployments.

Pending installs, as in we have planned them but they are not running yet: India, where we are currently waiting on paperwork; two new boxes for DE-CIX; a new box in Bangladesh; and also Taiwan.

Looking at the status since the previous meeting, we see obviously a growth in the number of operational boxes, and the rest is fairly stable because we have done no decommissioning.

We also have the tt-host@ripe.net mailing list, which we have had for a while. We are going to change the usage of that slightly: we are now going to subscribe all current owners and all new owners; you can unsubscribe if you don't want it. What we are going to do there is announce new deployments of all TTM boxes, and it's supposed to serve as a platform for owners. We sometimes see that owners want to communicate with each other, but we can't just give out owner data to everyone; that's sometimes difficult. So this is: come and share your experiences and opinions on what you do with TTM.

So, that was it already. Are there any questions?

CHAIR: Any questions for Eric? Thank you very much. So next is Franz, I believe. Franz is going to start with a description of what Netsense is, and Vasco, I believe, is going to do the live demo.

FRANZ: Hi everybody. I am from the RIPE NCC. Some of you might have seen me yesterday already talking about Netsense in the Plenary and might ask why another presentation about Netsense.

Well, in the Plenary, we tried to keep it very general. Here we would like to show you a little bit more about the technical details, how we came to the product as it is today, and what the detailed backgrounds are. But to remind you, or for those of you who weren't in the Plenary yesterday, I would also like to give you a little bit of history and background on Netsense.

So, it all started with Hostcount in 1990; it was actually done by the RIPE community, not by the RIPE NCC. In 1999, the RIPE NCC started the Routing Information Service. It's a passive measurement service where we collect BGP updates around the world. In 2000, we officially launched the TTM service. It was around before that already as a project, but the official launch was in 2000, and, like Eric said in his presentation, we are now very happy to have test boxes on every continent except for Antarctica. If anybody has an Internet connection in the Antarctic, please talk to me.

And in 2004, we started the DNSMON service, which came out of a need of root server operators to monitor their DNS instances. And finally, in 2006, the RIPE NCC made an effort to combine all those services into one department and to work together on them: that was Information Services. Over the past years, we made a lot of improvements to these services, but recently we found out that we are still kind of missing the wow factor in all of our tools.

And in order to get there, we asked for some outside help from a company called SunnyDale. They are based in Amsterdam, and they helped us catalogue our entire portfolio of tools and services, which were more than 30. We tried to categorise all of this along two major axes. The one on the top goes from operational to strategic: monitor means you want to be alarmed about issues that happen in your network; diagnose means you want to look at the issues afterwards; forecast means you want to see how things are developing in the future.

And on the second axis we have what we actually have: our data sets, our tools, and then the analysis services on top. We figured out that those two categories on the top here are the most important ones, and that we have to focus on them a lot.

We also looked at our users, you guys, and found out that there are basically seven major categories; we also tried to put you in that grid. And here are the two very important ones again. With Netsense we decided to focus on the two in the middle, because there is a lot of overlap: we have in here ISP decision makers and engineers, you guys, but also media and governments.

We also talked to our stakeholders at the last RIPE meeting, which is a big thing that we are trying to do: we really want to know what you guys want from this new thing, and we got very, very good feedback from you. The big points were: we need more user-friendly tools; we need to present our information better; we need to provide global overviews of our data and have a kind of drill-down approach, so you see the global overview and then you can dive deeper into the information. And to all of you who participated, or who want to participate in the future by giving us feedback: thank you very much.

That whole thing, especially the stakeholder part, your feedback, didn't fit so well into our old development model, though, which was the traditional waterfall model. So we were looking for something new, and our big requirements were: to get continuous feedback from our users, which will result in continuously changing requirements, and we really want to react to those changes very quickly and efficiently. The other thing that we wanted was to have production-ready software as early as possible, so that we can show you this thing, listen to what you want, and change it accordingly. And how we managed to do all of that, and how we got it all together, my colleague Vasco will explain to you in a little more detail, and he will also show you Netsense itself. Thank you.

VASCO ASTURIANO: Hello everyone, my name is Vasco Asturiano. I also work in Information Services at the RIPE NCC, and I am going to tell you a bit more in depth about the technicalities behind Netsense and show you a little demonstration of its features.

So, first, as my colleague Franz was saying, just a few words about how we approached this project. We wanted something that would give us rather quick and progressive results with changing requirements. So we decided to move away from the traditional waterfall project management that we had been using before, and went for a more dynamic approach that would have a faster and easier response to obstacles and changes. We settled on Scrum, and that doesn't mean that we all started playing rugby. Scrum is a project management methodology focused on continuous improvement and iterative development, so we used agile techniques for the software development. How this works is: we first build a backlog of prioritised user stories, and these act as requirements; then the Scrum team has freedom of choice on how to implement those user stories, and also on how much complexity each one carries, etc. So the whole project gains a more dynamic flow.

We worked in two-week iterations, which in Scrum terminology are called sprints, and at the end of each sprint we do an internal presentation of the software deliverables that we had promised for that sprint. After we do this bi-weekly demo, the team has a feedback and retrospective session where we focus on how we can further improve our future sprints, and things like that.

So, we started this whole effort in the second half of July this year, so it was quite recent. And we had five people working in our Scrum team.

Since then, we have done six and a half sprints. I say half because we are actually still in the middle of one, which will finish this Friday. And we have completed 51 user stories, which amounted to a whole 331 complexity points. We use things like this sticky-note board to maintain the state of the tasks and user stories within one sprint. And for each sprint, we also do a burn-down chart, which basically represents complexity points per day of the sprint; the red line is the ideal, and the black line is what actually happens. So the top one was for the first sprint we ever did, where you can see we were kind of under the plan, and the one below was for a recent sprint where we moved above the plan. But we were able to finish most of the user stories and tasks for all the sprints, so nothing went terribly wrong.
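The burn-down bookkeeping described here can be sketched in a few lines; the sprint length, point total and per-day numbers below are invented for illustration, not the team's real data:

```python
def burn_down(total_points, days, done_per_day):
    """Remaining complexity points per sprint day.

    Returns (ideal, actual): 'ideal' is the straight line burning
    linearly to zero (the red line in the talk's charts), 'actual'
    subtracts the points really completed each day (the black line).
    """
    ideal = [total_points - total_points * d / days for d in range(days + 1)]
    actual = [total_points]
    for done in done_per_day:
        actual.append(actual[-1] - done)
    return ideal, actual

# Hypothetical 10-day sprint worth 50 complexity points.
ideal, actual = burn_down(50, 10, [3, 5, 6, 4, 7, 5, 6, 5, 5, 4])
```

Days where the black line sits above the red one are "above the plan" in the speaker's sense: more points remain than the ideal pace allows.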

When we sat down to design Netsense, the main focus was on building a rich yet clear user interface. We wanted things like simple colours and intuitive, pleasant navigation. We also wanted something that gave us flexibility of development and would be easily extendable, so we could focus on adding functionality later without much trouble.

So, for the initial release, the functionality that we focused on was just giving network operators a simple overview of, first of all, the current global Internet state, and also an analysis of their own network. These are the features that we decided to include in the first release. We wanted to represent the state of global connectivity in terms of packet loss; this is taken from TTM data and is basically what every user sees when they come to our page. We wanted an interactive AS dashboard, which would list all the prefixes originated by that AS and in which you could also click around and see changes in the visibility, etc. We also wanted to display the visibility of an AS or prefix geographically around the globe; this is done with data taken from RIS, which is BGP data collected at several exchange points around the globe. And we wanted to have a routing registry consistency check, which basically compares the data in the routing registry in the RIPE Database with the real BGP data and tries to find inconsistencies. After we had settled on a design, we outsourced the styling to a design company so they could come up with a specific styling code.

So, based on all the assumptions that we had, this is the initial wireframe that we did. All that I have talked about is more or less there. And after making all of this functional, what we ended up with was this. This is actually what you see if you go to Netsense at the moment. All the elements are more or less there; they just look better. So...

What technologies did we use to do this? At first we considered using a portal system, which would act as a plug-in container, but eventually we decided to go for a more self-contained application environment that would give us more flexibility and freedom for interaction between the different plug-ins. So we settled on Wicket, which is a component-based application framework from Apache. The different components were built in Java. We used Ajax to load the data from the server to the website, so we would have dynamic loading of the results and independent plug-ins. And whenever we needed to plot geographically relevant data, we used the Google Maps API. These were the main technology elements we used.

So now I'll show you a little demonstration of using Netsense. I hope it all goes well.

(Demo)

So, if we navigate to netsense.ripe.net, this is the first entry point that we see. Basically, this map here is using live data from the last 24 hours. For each continent, we have a selected set of test boxes that are sending continuous packets to the boxes in the other continents. So, from this we can measure outbound packet loss from each of the continents to the other ones. I'll just scroll over any of these and you will see it shifts automatically; everything is 100 percent, which means that in the last 24 hours there was no packet loss in our measurements.
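The per-continent figure reduces to simple loss accounting over sent and received probe counters; a minimal editorial sketch, where the continent pair names and the 24-hour counter values are invented, not TTM data:

```python
def outbound_loss_pct(sent, received):
    """Outbound packet loss percentage for one source region."""
    if sent == 0:
        return 0.0
    return 100.0 * (sent - received) / sent

# Hypothetical 24-hour probe counters from one continent's test boxes
# towards the boxes on other continents.
counters = {
    "Europe->Asia": (86400, 86400),    # nothing lost
    "Europe->Africa": (86400, 86311),  # 89 probes lost
}
loss = {pair: outbound_loss_pct(*c) for pair, c in counters.items()}
```

A pair showing 0.0 corresponds to the "100 percent" (no loss) state seen in the demo.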

And then, if you want to go into your own network, let's try just putting a prefix here. You can see that this is actually doing a validation of the prefix, so if you put in something that's slightly invalid, it will eventually not let you do it. So it seems that I got blocked... I don't know, is there something someone can do to let me through?
If I put in something like this, then it doesn't actually let me go forward, because it's not valid.

So, if we get through, then we see all the components start loading data through Ajax; you see this one is still loading here... it finished. What we see here on the left side is that we first find out that the prefix was actually announced from AS3333, and then we list all the prefixes that are announced from that AS. Whatever we do here will propagate to the part on the right. In this case we don't have a lot of prefixes, but if we did, this would paginate, and we could sort by the last time we saw an update for each prefix, etc. Here on this side, we see the visibility of the prefix itself. We can zoom in on a continent, for example here, and we can see all the exchange points that are present, and what the ratio is of full-table peers versus the ones that do see the prefix. In this case it's all 100%. And down here we see the routing consistency, as I said before: we do see the route in BGP and there is a route object present in the RIPE Database, so it all seems to be okay.
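The visibility percentage per exchange point is, in essence, the share of full-table peers at a collector that carry the prefix; a hypothetical sketch, not the actual Netsense code:

```python
def visibility_pct(full_table_peers, peers_seeing_prefix):
    """Share of full-table RIS peers at one collector that see the prefix."""
    if full_table_peers == 0:
        return 0.0
    return 100.0 * peers_seeing_prefix / full_table_peers

# e.g. 10 of 12 full-table peers at a collector carry the prefix,
# giving the sort of 83% figure mentioned later in the demo.
pct = visibility_pct(12, 10)
```

Values below 100% at some collectors are what makes the map show partial visibility for a prefix.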

We can also run this for an AS number, so let's actually try the same one. In this case we get, again, the same list of prefixes, but none of them is selected, so we actually see the visibility for the whole AS, and we see that in some places it's less than 100%. This is probably due to this prefix here, which doesn't have full visibility. If you click on it, you actually see the map updating, and you see there it's 83%. And down here, on the routing consistency, we check all the route objects for this AS against what's in BGP. So we actually see that for the v6 prefix that we see in BGP, there is no route object present in the database.

We can also see things like the import and export peers as they are mentioned in the aut-num objects in the RIPE Database, and we find some inconsistencies here. You can filter out the things that are okay, to show only the things that are not okay.

So in this case, we see that in the RIPE Database there are these import peers mentioned, but for one reason or another, we don't see them in BGP. That doesn't mean they don't exist; it might mean that we just do not see the AS paths that include this left neighbour. And you can try to find inconsistencies the other way round, so the ones that are not present in the database... okay, we have some of those also. In this case, we see that some left neighbours of this AS are seen in BGP, but the imports are not present in the RIPE Database. So this gives you a quick overview of what might be happening with your network.

Okay. This is it for a quick overview of the tool.

So, from here on, we have a couple of things on the horizon, a couple of features that we want to expand it with. For example, besides packet loss in the global representation, we want to display packet delay between the several regional areas, and we want to display the root server reachability from each region; this will be taken from DNSMON data.

We want to be able to show this not only for the whole globe, but to zoom into a specific region or even perhaps a specific country, if that's more interesting to the user.

We want to display stats about the global routing table, so we can see the growth of prefixes over time, and also the distribution of prefix sizes and aggregation levels and things like that. This will probably all be on the main page, in what you see for the current state of the Internet.

Then we want to integrate Netsense with single sign-on, so we can link it with other systems and make the navigation transparent to users. And probably do things like being able to create alarms from what you are querying at the moment, and some other ideas that might come.

Besides BGP visibility, we also want to show BGP stability in geographical terms, so we can see if a prefix has sent many updates or has actually been stable in the last day or so.

We'd probably want to do another plug-in that links with BGP Viz, which shows a visualisation of the updates over time, so we can visualise how the updates have been propagating.

We also want to make a representation of the transits of an AS: we basically do this calculation by taking into account the AS paths that we see, and making a percentage of which neighbours are more used in BGP.
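That transit calculation might be sketched by counting, over all observed AS paths, which neighbour sits directly upstream of the AS in question; the AS numbers and paths below are invented, and this is not the actual Netsense implementation:

```python
from collections import Counter

def transit_shares(as_paths, asn):
    """Percentage of observed AS paths in which each neighbour
    appears directly before `asn` (i.e. hands it the traffic)."""
    counts = Counter()
    for path in as_paths:
        if asn in path:
            i = path.index(asn)
            if i > 0:
                counts[path[i - 1]] += 1
    total = sum(counts.values())
    return {n: 100.0 * c / total for n, c in counts.items()}

# Three hypothetical paths towards AS 65001.
paths = [[100, 200, 65001], [100, 300, 65001], [400, 200, 65001]]
shares = transit_shares(paths, 65001)
```

Here neighbour 200 carries two of the three paths, neighbour 300 one, which is the "which neighbours are more used" percentage.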

Whenever we look something up for a prefix, instead of showing only the exact match, we might want to show more and less specific prefixes, so we can see if there are any overlapping routes or things like that.

For the prefix list of an AS, we might want to be able to travel back into the past, so we can see how our prefix list actually looked yesterday, or last week, or last year, and do a little statistical analysis on the prefix sizes.

And besides all of these miscellaneous ideas, of course we are open to feedback and greatly encourage it. If you see something and you think, wow, I'd really like to have that there, to display that information about my network, then please let us know, because we want to know. This is very valuable information for us.

You can always reach us at netsense@ripe.net, and for this week you can also catch me in the coffee break area; I'll actually be there right after this session and tomorrow during the day. And feel free to use this; I believe you can probably all access it now.

And that's it. Do you have any questions?

CHAIR: Any questions for Vasco or Franz? I'd just like to say, I think it looks fantastic. I think the RIPE NCC has always provided really good service and data but that's sort of a step ahead in the way that things should be presented. So thank you very much.

(Applause)

CHAIR: Next we have Timmo.

TIMMO: Good afternoon. I'd like to present to your attention this work on throughput metrics and packet delay in TCP networks. One year ago, at the RIPE 56 meeting in Berlin, we presented a method of measurement of available bandwidth between two points in a global network. Now I would like to tell you about two metrics in a TCP network. Two bandwidth metrics come into play: the capacity and the available bandwidth. The capacity is the minimum transmission rate among the links of the path. Note that the capacity does not depend on the traffic on the route. The available bandwidth, in turn, is the minimum spare capacity, that is, the capacity not used by other traffic, among the links.

In figure 1, you can see the illustration of these metrics. It should be noted that these metrics were first presented in the work of...

I want to tell you about the model. In it, we want to represent the point-to-point delay; we refer to the minimum path transit time for a given packet size. By fixing the point-to-point delay of the packets under study, we were able... In figure 2, the dashed line shows the linear dependence between the average network delay and the packet size. The slope of this line corresponds to the available bandwidth, while the other slope corresponds to the capacity.

Prolongation of the line gives us the intercept value, A. This intercept value is the minimum delay that a very small packet experiences when transmitted in the network from one point to another.

The well-known expression for the throughput method describes the dependence between the network delay and the packet size... Generalising this expression for the case when a path consists of two or more hops, we get a simple way of estimating the throughput metrics, including available bandwidth and capacity. Equation 3 shows the way to get the bandwidth. The new method supposes the sending of packets of different sizes over the same path: if the throughput between two fixed points is measured with two packets of different sizes, then we get two different delay values. Equation 4 is the system for finding the available bandwidth, and equation 5 is the equation for the capacity.
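Assuming the linear delay model from figure 2, delay(S) = A + S/B, the two-packet-size system reduces to a slope estimate between the two samples. A minimal sketch, not the authors' actual code; the sample numbers follow the 100-byte and 1024-byte probes and the 493-microsecond delay difference quoted later in the talk:

```python
def bandwidth_bps(size1_bytes, delay1_s, size2_bytes, delay2_s):
    """Solve delay = A + S/B for B given two (size, delay) samples.

    Per the method described in the talk: feeding in minimum delays
    estimates the capacity; feeding in average delays estimates the
    available bandwidth.
    """
    delta_bits = (size2_bytes - size1_bytes) * 8
    delta_delay = delay2_s - delay1_s
    return delta_bits / delta_delay  # bits per second

# 100-byte probe at 43,591 us and 1024-byte probe at 44,084 us
# give roughly 15 Mbit/s, close to the ~14 Mbit/s capacity quoted.
b = bandwidth_bps(100, 43591e-6, 1024, 44084e-6)
```

Once B is known, the intercept A = delay - S/B recovers the minimum small-packet delay from equation 3's line.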

In figure 3, you can see an illustration of these equations with two different packet sizes.

So, after we had found this way of measurement, we wanted to make an experiment with the RIPE TTM system, and the design of that system meets all the requirements of our method: namely, the ability to change the size of a test packet and to measure network delay with very high accuracy. We had to choose two different packet sizes. The first packet size is the default of 100 bytes. It is also reasonable to test a bigger packet size; we took 1024 bytes, with a frequency of 40 per minute. The screenshot on this slide demonstrates how we set up this experiment; it is necessary to set the packet size and the frequency of testing.

The test results are obtained by telnet to the RIPE test box on port 9142. It's important to get data from both ends of the investigated channel simultaneously; in the case presented here, those are tt01.ripe.net and tt142.ripe.net. The obtained data will contain the required packet delays.

On this slide, we go to the sending box and find the lines. The first number in the line is the number of the packet; we need to find this number on the receiving side of the channel. The next number, less by one, is the packet size. Then we go to the data from the receiving box and have to find the number of the packet that we are interested in. For a given packet number, it is easy to find the network delay; in our case it is 44,084 microseconds. The following packet, which you can see on the slide, has a size of 100 bytes and a delay of 43,591 microseconds. So the difference is 493 microseconds.

After we have obtained the delays, we have to compare several of them, because the minimum number for a precise experiment is five pairs of delays. In the last column, you can see the difference in average delays. In the other columns, you can see the minimum difference in delays.

This experiment was made between the points tt142 and tt01: we sent packets from tt142 to tt01. This slide demonstrates the same situation, but in the other direction, from one point to the other.

Then the available bandwidth and capacity of the link, the first direction being from tt142 to tt01, can be calculated using these equations. In this direction, the available bandwidth is approximately several megabits and the capacity is about 14 megabits, so we see that the capacity is larger than the available bandwidth.

The bandwidth and capacity in the reverse direction can be calculated in the same way, so we also get two equations and obtain the results.

We have also developed a utility, av-band. It is very simple: it works with the ICMP protocol and realises this method, but it is not as precise. We don't get the same accuracy, because it measures round-trip time and not one-way delay, so that's a problem.

The main problem of our work is the small number of experiments: so far we have made only one experiment with the RIPE TTM system, which gives high precision of measurement, and some local experiments with the utility and with local network providers. Also, it would be great to include in the RIPE TTM system statistics on packets that make calculation of available bandwidth and capacity possible. It would be useful to compare the results obtained by our method with other utilities, pathrate and pathload; unfortunately, such tests have not been made yet. We are looking for partners who can help us make additional measurements with the RIPE TTM system and with the pathrate and pathload utilities, because we need to compare the results obtained with the RIPE TTM system with, for example, those of pathrate.

Thank you for your attention. If you have any question, I can answer it.

(Applause)

CHAIR: Thank you very much Timmo, are there any questions?

Next we have Emile presenting on behalf of Rob Smets. This is a presentation on IPv6 deployment monitoring.

SPEAKER: Hello, my name is Emile Aben. I work in the RIPE NCC Science Group, but this is not my presentation. This is a presentation that Rob Smets was originally going to give, but unfortunately he had a conflicting meeting. So I am just the messenger here, but a very willing messenger, because I think this is a really nice measurement study, or rather a proposed measurement study, on IPv6 deployment monitoring, by TNO, which is a Dutch research institute, and GNKS Consult. Maarten, who is also here in the room, can answer any questions on the specifics of the project.

So, this presentation is about v6 deployment measurements. First, I'll go into a little detail about already existing deployment metrics; then there will be a breakdown of the measurements that are going to be done in this project into submeasurements, and then a little bit about the specific methodology of one specific measurement that help from the community is needed on.

There are some considerations on privacy, commercial confidentiality and data retention, and finally a call to participate.

The reason I think this presentation, or this project, is important is that it provides the community a way to give feedback on these measurements.

So, first of all, the existing IPv6 deployment metrics out there. There are a couple of ways you can go about looking at v6 deployment; Geoff Huston had a nice overview of them. They all have their pluses and minuses. For instance, if you look at the RIR stats files and look at the numbers there, you are only measuring intents to do something with v6, not real deployment. Traffic volume measurements, for instance, are hard to do for multiple locations, especially if you go across operator borders, because it's very hard to share that type of data. So, this project was commissioned by the European Commission, based on the 2008 European Union action plan, which had a specific goal that Europe should set itself: at least 25% of users should be able to connect to the v6 Internet and access their most important content and services without noticing a major difference as compared to v4, to be reached by 2010. This study was commissioned to actually provide the European Commission with some measurement data, to give some feedback on what's actually going on there, and it's really user oriented. So this project took that definition and split it into three different submeasurements, as you can see in the underlining already.

So, the three parts are: first, unique users. That's the part that everybody has been focusing on, I guess: the fraction of v6 users over all users. The rest of this talk is going to be about that specific submeasurement. The two other parts are: most important content and service providers, which, as Maarten already briefly touched on yesterday, is done by looking at the Alexa 500 for every country and seeing if there is a v6 version of a website available there. The other component is the major difference in experience for a user, and TNO developed a framework for that; it's called G.1030. You can look it up. There is also a white paper on the whole methodology available on a website that I'll show later.

So then, if you add it all up, the metric that is going to be measured is U times C times R. It's important to note that this is stricter than just looking at how many users are able to connect, because the other two factors also play into it. The nice thing is you can do this per EU member state, or aggregate it for the EU, and look at it as a function of time, which is probably the most interesting thing, to see how this develops over time.
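The composite metric is a plain product of the three fractions; a tiny illustrative sketch with invented numbers:

```python
def v6_penetration(u, c, r):
    """Composite IPv6 metric: U (fraction of users able to reach v6)
    times C (fraction of most important content available over v6)
    times R (relative user experience factor). All three in [0, 1]."""
    return u * c * r

# e.g. 5% capable users, 20% of top content on v6, near-equal experience.
m = v6_penetration(0.05, 0.20, 0.98)  # stricter than U alone
```

Because the factors multiply, the composite is always at most the smallest of the three, which is what makes it stricter than counting capable users alone.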

So, the measurement methodology that is going to be used basically needs three parties to cooperate. First, there is a participating website, and that's what this presentation is about in part: getting people to participate in this, or figuring out if there are any roadblocks for people to participate. The idea is to have a participating website that puts a little bit of JavaScript on its pages; end users load that bit of JavaScript from the website and it gets executed, and during the execution, it fetches three objects from a test server. The thing there is that one object only has an A record in the DNS, one only has a AAAA (quad-A) record, and the other one has both of these records. So you can actually see what the behaviour is for a specific user.

So, you get all that information on a test platform. This little piece of JavaScript also generates a unique ID, so you can correlate these three requests, and you store the results in a database for later processing.
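On the test platform side, classifying one session from which of the three objects arrived, and over which IP version, might look like the following sketch; the field names and labels are invented, and the real platform's schema may differ:

```python
def classify_session(hits):
    """hits: dict mapping object name ('a-only', 'aaaa-only', 'dual')
    to the IP version (4 or 6) its request arrived over, for one
    session ID. Returns a rough capability label for that user."""
    if "aaaa-only" in hits:
        # The client resolved and reached a AAAA-only name: v6 capable.
        if hits.get("dual") == 6:
            return "v6-capable, prefers v6"
        return "v6-capable, prefers v4"
    if "a-only" in hits:
        return "v4-only"
    return "unknown"

assert classify_session({"a-only": 4, "aaaa-only": 6, "dual": 6}) == "v6-capable, prefers v6"
assert classify_session({"a-only": 4, "dual": 4}) == "v4-only"
```

The dual-stack object is what reveals preference: a v6-capable client that still fetches it over v4 would be counted differently from one that prefers v6.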

The nice thing is that your website doesn't have to be v6 enabled. So if you are only on v4, your end user will still get that little bit of JavaScript loaded over v4, and the measurement is really between the end user and the test platform. As some of you noticed, this is very similar to what's already been done: there is a website that has similar measurements, and Google has done similar measurements. But the nice thing about this measurement is that it is hopefully going to be scaled up to multiple participating websites, and the results are going to be presented to the European Commission. So, hopefully measurement data from this will actually get onto the desks of decision makers there, so they can see what the real current state of things is.

So, a little bit more detail on what data is collected: of course the type of request and the IP address, and there is a session ID to correlate all of these things. There is a site code, a referral URL and a site name; these are used together to see if there is any tampering with the measurement. Then some timestamps and a browser type. And there are some sanity checks done on the data at the point where it comes in.

So, then there is a whole loop of things that happen periodically, and I am not going to go through all of the details here, but one interesting thing to note is that, through a geolocation database, the v4 addresses are mapped to a country. That is nothing special, but the nice thing is that with this you have a correlation between a v4 and a v6 address, and through that correlation you can actually get some geolocation information for v6.

So, if somebody would run this on their website, what would they get? This is an example of what's currently in the test platform, and this is real data that was collected on the TNO website. TNO is a scientific organisation, so you expect v6 deployment to be a little higher than in other places. What you see is that the upper graph has v6 deployment on the Y axis and countries on the X axis, and you see for instance in a country like Latvia, which Rob highlighted here, a 5% deployment rate, which is quite a bit higher than numbers I have seen before, which are all around 1%. But he also put down some numbers on how many measurements he currently has, and it's only two v6 and 37 v4 addresses, so you can put some question marks over the statistical significance of this. Sweden, for instance, already has 29 v6 addresses for this single site, and they are at a 6% deployment rate.
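The correlation step described above could look something like this sketch (the lookup table and session layout are made-up stand-ins for illustration, not the project's real schema):

```python
# Illustrative v4-to-country geolocation lookup (stand-in for a real database).
geo_v4 = {"192.0.2.1": "NL", "198.51.100.7": "SE"}

# Each session ties together the v4 and v6 addresses seen under one session ID.
sessions = [
    {"id": "s1", "v4": "192.0.2.1", "v6": "2001:db8::1"},
    {"id": "s2", "v4": "198.51.100.7", "v6": None},  # v4-only client
]

# A v6 address inherits the country of the v4 address from the same session.
v6_country = {}
for s in sessions:
    country = geo_v4.get(s["v4"])
    if country and s["v6"]:
        v6_country[s["v6"]] = country

print(v6_country)  # {'2001:db8::1': 'NL'}
```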

It's also nice to see, sort of differences between countries here.

And the lower graph shows details for four countries. I don't know, is it visible? The upper one is the Netherlands, the one below is France, then the UK, then Germany. On the far right-hand side you also see that for the UK there are only 24 v6 addresses, and that causes the graph to be bumpy, because this is accumulated data up until a data point on the X axis. And of course other graphs are possible here, and I think the people from this project would be very cooperative if people think of any other nice things that they would want to do with the aggregate data on this.

So, of course, in every measurement you do you have measurement errors, and you should be aware of what errors there are. Here, anything where a user does not map to a single IP address all of the time causes problems: proxies, for instance, and the usual suspects, NAT and DHCP. There is little you can do about these beforehand, but you can possibly correct for them afterwards if you have some measurement data. A thing you cannot do anything about is browsers without JavaScript, but the expectation is that that is a minority. The other problem would be too little diversity in websites.

So you only get the scientists, who are possibly much more v6 connected than the rest of the world. Hence the call to participate in this measurement effort.

Another problem could be that you have too few unique v6 addresses, and Rob made a little graph of what the upper and lower 95% confidence levels would be for a 0.1% v6 deployment percentage. If you look at 10 v6 addresses, you see that the upper bound would be 0.2 and the lower bound would be 0.05, so you are off by a factor of two, and for this type of measurement that's still very useful; you are at least in the right order of magnitude. So, are we still measuring with a microscope or not? You can answer that type of question.
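The kind of confidence bounds Rob plotted can be approximated with a standard Wilson score interval for a binomial proportion. This stdlib-only sketch (my own illustration, not the study's code) reproduces roughly the numbers quoted above:

```python
import math

def wilson_interval(successes, total, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / total
    denom = 1 + z * z / total
    centre = (p + z * z / (2 * total)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / total + z * z / (4 * total * total))
    return centre - half, centre + half

# 10 unique v6 clients out of 10,000 measured: 0.1% observed deployment.
low, high = wilson_interval(10, 10_000)
print(f"{low:.3%} .. {high:.3%}")  # roughly 0.054% .. 0.184%
```

The bounds span about a factor of four around the point estimate, which matches the "right order of magnitude, but little more" reading given in the talk.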

Some other considerations here of course are privacy, commercial confidence and retention. Ideally you would want to store as much data as possible, so later on you can correct for DHCP effects or NAT or anything else that you can detect and correct, and with all the data available you could think of a nice geolocation service coming out of this, for instance. On the other hand, you could also think of time limiting the raw data you keep in your database and only retaining aggregate statistics, so you have no possible data problems there.

Another thing here is privacy, but the issues here are very similar to the global Internet advertising solutions, like the AdSenses or the DoubleClicks. It's the same mechanism: you put some code on a website, and a third party gets to see some of your traffic. So, it's all about trust here, I think. So you need to have a good policy on how to deal with that, and that's where the commercial confidence comes in.

So of course web masters will only have access to data related to their own website. And there won't be any data disclosed to third parties at any time that would allow identification of persons or websites, that's the current policy that this project is working under.

If you want to be stricter, you can think of additional measures like anonymising v6 addresses or parts of v4 addresses. V6 addresses are already truncated; only the first 64 bits are kept. So there is a little bit of anonymising going on there already.
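Truncating a v6 address to its first 64 bits, as described, can be done with Python's stdlib `ipaddress` module. This is a sketch of the idea, not the project's actual code:

```python
import ipaddress

def anonymise_v6(addr):
    """Keep only the first 64 bits (the routed prefix); zero the interface ID."""
    net = ipaddress.IPv6Network(addr + "/64", strict=False)
    return str(net.network_address)

print(anonymise_v6("2001:db8:1234:5678:abcd:ef01:2345:6789"))
# 2001:db8:1234:5678::
```

Dropping the interface identifier removes the host-specific part of the address (which may embed a MAC address) while keeping the prefix needed for per-network statistics.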

So, that being said and done, if people want to participate, this is the URL. The white paper I was talking about is also available from this page. If you want to participate, go to the, unsurprisingly, "participate" link there, and there is a little story there on why you should participate and what you get back from it: all the cool graphs you can think of about v6 deployment of your customers, and you will also help provide useful data to the European Commission. And if you don't want to participate, then it would be really useful if you could provide feedback on why you would not want to participate, and what changes would make you willing to participate in this.

So, that's it from me. Are there any questions? So everybody is participating?

CHAIR: We have got a couple of questions.

AUDIENCE SPEAKER: Daniel Karrenberg, RIPE NCC. I actually like this very much, because we had some ideas at the RIPE NCC to do just those kinds of measurements, and if the European Commission, apparently, is forking out money to do this, I consider it my tax euros at work and the RIPE NCC doesn't need to do these kinds of things. And apparently, judging from the presentation, they have even engaged some people who know what they are doing, which is even better.

I think my feedback immediately would be that the NCC would like to participate by putting this at least on some websites I am responsible for, but the one thing that I'd like to discuss with the TNO folks beforehand is what kind of data they will publish from this. And I am not concerned about privacy in the sense of, you know, what my website gets up to or something like this. I am concerned about individuals' privacy, but I am also concerned about sharing raw data so that other people can analyse it, maybe with slightly different methods than this particular study uses. And I am planning actually to go talk to them. Once that's done and the answers are acceptable and the publication policy actually looks, you know, acceptable, then I will definitely see that this little JavaScript thing is put on the websites of the NCC that I am responsible for, and I'll also go campaigning to find other people to do it, because I don't think there is any significant concern for web masters here, other than that it may break their customers, but that's something you can very easily check against, and it would also be in the interest of the TNO folks not to do that, because that would kill the study right there. So once there is a little bit more clarity about whether we can get at the raw data, I think we can wholeheartedly support this and ask our community to participate wholeheartedly. At the moment I am doing it a little bit reservedly, because I just don't know what kind of benefit in terms of raw data we get out of it.

AUDIENCE SPEAKER: Steve Padgett. One comment I had regarding the graphs you were showing: one thing that would probably be very beneficial, especially for service owners, would be to identify when you present a user with an A record and they are able to fetch it, but when you present them with an A record and a AAAA record they are not able to fetch it; those are obviously users that are broken. Services are not going to want to deploy both A and AAAA until they can verify that their users aren't going to break when given that.

And then the second point: you mentioned the IPv6 addresses as being aggregated to /64s. I know, at least from personal experience with ISPs, that ISPs are typically handing out a /60 or somewhere around there to their end customers, whereas on IPv4 they may be presented with one single IP. Depending on what LAN I am on, I could be coming from one of several /64s, so you might see better correlation depending on how you structure your database.

SPEAKER: To your second point: I think if you have the measurement data, then you can sort of see where you have to cut things off. And to your first point: doing performance type things or seeing what is broken is not the specific focus of this study, I think. There is, for instance, the IPv6 test.max.nl site, which does something very similar to what you are talking about.

AUDIENCE SPEAKER: I think you have got the data here, you could do that, right?

SPEAKER: Indeed.

DANIEL: I just want to understand what Steve was saying, because as far as I see this methodology, it's not dependent on AAAAs from the websites that participate, right? Because it only needs a AAAA for the test platform, which is there, right?

AUDIENCE SPEAKER: Yes, but if a website that's participating is considering moving to AAAA, right, they would want to make sure their users aren't broken when moving to that.

Daniel: Yes, okay. I wanted to prevent the misunderstanding that you have to be v6 enabled in order to participate in this thing.

MAARTEN BOTTERMAN: I want to comment particularly on the data retention policy and how we deal with the data at the moment. The intention was not to release it to anybody outside of the study. We have a reputation to keep, and obviously sharing that would be very damaging to the project.

However, the question came up whether it would be possible to do research on that data, and the feeling was that we would want to keep that very limited. At the same time, if we would be able to do this in collaboration with the RIPE NCC, in order also to be able to reach out further to its members, I think that is a case we are certainly going to consider, and my feeling is, and I would just like to get your confirmation on this, that you would not see it as a breach of the care with which we deal with the data if we share it with the RIPE NCC for analysis. Can we raise hands, if that's not a problem, to share the data with the RIPE NCC? And can you raise your hands if you have a problem with sharing it with anybody else beyond the project team? There is just one person. We can talk later.

CHAIR: Okay? Thank you very much. Something else is selected, that's why there is strange stuff coming up.

CHAIR: I think we are running a little bit ahead of time, which probably emphasises the point that I am trying to make with this talk about refocusing the test traffic Working Group. We have sent out a proposed new charter. I don't want to discuss the text of the charter, but I want to start the process of building consensus about what we are trying to do with the Working Group.

The test traffic measurement service was conceived in 1997 and has been in full service since 1999. You will notice there is a bit of a conflict with the date that was on Franz's slide before, but I think we are in the same sort of area. We had a couple of things to start with, and then the Working Group itself started, as a BoF, over ten years ago. I became chair about two years ago. At the moment we have been carrying on with the original charter: the group was formed to discuss specifically the RIPE NCC TTM service. We did have scope in it for other performance measuring techniques and devices, but it was pointed out subsequently that it's specifically not about comparing ISP performance for marketing purposes. We are not about somebody shouting "look how good my network is compared to theirs". It's more about the use of data to characterise networks.

Common agenda topics are always on the test traffic measurement operations and also on developments such as the presentation that we had earlier about Netsense.

But what we have also noticed is that there has been a bit of a creep away from the original charter, so we have got people talking about other measurement projects. We usually have an update from the IETF on what's happening around traffic measurement there.

There has been talks about the other tools that have come out of the information services department that have used TTM, but also about things that are quite removed but are really about measurement.

So what we want to do now is effectively rationalise the Working Group, to give it a broader focus. Make sure that it's a forum that is available for the NCC, the RIPE community and beyond: a global forum for people to discuss what's going on; to collaborate on the gathering of data, the tools that are used for gathering, and the analysis of it; and also to have that kind of scope that Franz was talking about before, where we have the operational side moving from monitoring through to diagnosis and then into the strategic idea of forecasting based on the measurements that have been taken. We want to make sure that it's still not just a marketing tool for different networks, so that people aren't using it to push one ISP above another. The question is, do we have your support to do this? If we have, then we can take it further and start working on that charter and move the group in that direction. Is there anybody who has got strong objections against what we are doing? I must say that the response on the list has been wholeheartedly in support, for which we thank you.

If we are going to push forward with this, and it sounds like we have the support of the community to do it, we are probably going to have to change the name to differentiate it from the test traffic group. We have come up with a couple of suggestions. One is just an acronym of Measurement, Analysis and Tools. Someone suggested Internet Measurement, which, considering it's my initials, I am quite happy with. If anyone has got a better idea...

So what I'd like to do now is does anybody have any questions, comments or any ideas for discussion of it? Or objections?

Dave.

AUDIENCE SPEAKER: Dave Wilson, HEAnet. You asked for support, and HEAnet would like to give that support. We think this is a good idea and it's a good way to expand the group.

CHAIR: Thank you. Daniel?

AUDIENCE SPEAKER: Daniel: It's basically codifying the status quo; we are already going in that direction. So what are you worrying about?

CHAIR: We wanted to make sure before we formalise that position, that there are no objections to it, but thank you, yes.

Thank you. The next steps on this, I have been asked by Rob to give a very quick update on this to the Closing Plenary on Friday, mainly to make sure that we aren't just speaking to the people who are already in the Working Group, because as we are trying to broaden the focus, we want to make sure we bring other people on board who aren't aware of this broadening of the scope.

We'll carry on the discussion on the mailing list about the proposed wording of the charter. And hopefully we'll be back in Prague in May of next year with a new, broader-focused Working Group. Thank you very much.

(Applause)

CHAIR: That concludes the Working Group for today. Thank you.