SIPit31 summary - Revision history (page created by Rjs, 2014-10-03)
<p><b>New page</b></p><div><pre><br />
SIPit 31 was hosted by ETSI in Nice, France<br />
the week of Sep 29 - Oct 3, 2014.<br />
<br />
There were 47 attendees from 16 companies visiting from 9 countries.<br />
We had 21 distinct implementations. <br />
<br />
<br />
The roles represented (some implementations act in more than one role):<br />
18 endpoints <br />
3 proxy/registrars<br />
<br />
Implementations using each transport for SIP messages:<br />
UDP 100% <br />
TCP 86%<br />
TLS 81% (20% server-auth-only)<br />
SCTP 10%<br />
DTLS 5%<br />
<br />
67% of the implementations present supported IPv6.<br />
<br />
There were two RFC4474 Identity implementations present.<br />
<br />
For DNS we had support for:<br />
Full RFC3263 : 71% <br />
SRV only : 10%<br />
A/AAAA records only : 19%<br />
no DNS support : 0%<br />
<br />
<br />
Support for various items in the endpoints: <br />
72% replaces<br />
50% 5389stun<br />
50% turn<br />
44% ice<br />
44% path<br />
39% 3489stun<br />
33% sip/stun multiplexing<br />
28% outbound <br />
28% gruu<br />
17% diversion<br />
17% turn-tcp<br />
11% history-info (there were no implementations of 4244bis)<br />
11% join<br />
6% service-route<br />
<br />
Support for various items in the proxy/registrars:<br />
67% diversion<br />
67% outbound<br />
67% path<br />
67% sip/stun multiplexing<br />
67% gruu<br />
67% history-info<br />
0% service-route<br />
<br />
The endpoints and B2BUAs implemented these methods:<br />
100% INVITE, CANCEL, ACK, BYE <br />
94% REGISTER<br />
94% NOTIFY<br />
94% OPTIONS<br />
83% REFER<br />
83% INFO <br />
78% UPDATE<br />
67% PRACK <br />
67% SUBSCRIBE<br />
50% PUBLISH <br />
28% MESSAGE<br />
<br />
100% of the implementations sent RTP from the port advertised <br />
for reception (symmetric-rtp). <br />
One implementation required the other party to use symmetric-rtp.<br />
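A minimal sketch of the symmetric-RTP behavior (ports and addresses below are hypothetical): using one UDP socket, bound to the advertised reception port, for both sending and receiving guarantees that the source port a peer observes matches the port in the SDP.

```python
import socket

def make_symmetric_rtp_socket(local_ip: str, rtp_port: int) -> socket.socket:
    """Create one UDP socket used for both sending and receiving RTP.

    Sending from the same socket that is bound to the advertised
    reception port guarantees the source port peers see matches the
    port in our SDP (symmetric RTP).
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((local_ip, rtp_port))
    return sock

if __name__ == "__main__":
    # Hypothetical local endpoints for demonstration.
    sender = make_symmetric_rtp_socket("127.0.0.1", 40000)
    receiver = make_symmetric_rtp_socket("127.0.0.1", 40002)
    receiver.settimeout(5.0)
    sender.sendto(b"\x80" * 12, ("127.0.0.1", 40002))  # minimal RTP-ish payload
    data, (addr, port) = receiver.recvfrom(2048)
    # The receiver observes the packet arriving from the advertised port.
    print(port == 40000)
```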
<br />
78% of the UAs present both sent RTCP and paid attention to RTCP they received.<br />
<br />
72% of the endpoints present supported SRTP using sdes.<br />
28% supported SRTP using dtls-srtp.<br />
<br />
78% of the endpoints supported multipart/MIME.<br />
There was one implementation present with S/MIME support.<br />
<br />
81% followed RFC4320 (corrections to the non-INVITE transaction)<br />
67% of the implementations present followed RFC6026 (corrections <br />
to the INVITE transaction)<br />
<br />
Not counting implementations that support events only for REFER:<br />
There were 11 SIP Event Server implementations <br />
There were 7 SIP Event Client implementations <br />
<br />
These event packages were supported:<br />
Server Client<br />
9 5 presence<br />
3 2 presence.winfo<br />
3 4 message-summary<br />
7 3 dialog<br />
1 1 reg<br />
2 2 conference<br />
1 1 kpml<br />
<br />
None of the implementations had explicitly updated to RFC 6665.<br />
<br />
All of the proxies present implemented RFC5393's fork-loop-fix,<br />
and two implemented max-breadth.<br />
<br />
Multiparty tests<br />
(Notes provided by Eoin Mcleod and Olle Johansson)<br />
<br />
This was a UA heavy event - far fewer proxy/SBC implementations than usual.<br />
<br />
We put up a simple bug counter application for attendees to increment the counter<br />
whenever they found a bug in their implementations. At the end of the week, the counter<br />
was around 150.<br />
<br />
We put pressure on TLS, IPv6, and SRTP testing, including each of those in<br />
every multiparty test. Participants were excited to exercise non-trivial tests<br />
while running dual-stack. The suite of automated self tests for TLS, DNS, and<br />
early media continues to expand and improve.<br />
<br />
During forking tests, we found several UAs that did not deal well with multiple<br />
200 OKs to an INVITE, essentially ignoring all but the first branch, leading to<br />
retransmission and eventual timeout of the other branches for lack of an ACK.<br />
Those UAs that dealt well with multiple 200 OKs did so by ACKing and sending an<br />
immediate BYE to all but the dialog they chose to keep.<br />
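The well-behaved handling described above can be sketched as follows. This is a hypothetical illustration, not a real SIP stack API: the send_ack/send_bye callables and dialog values are placeholders.

```python
# When an INVITE forks and several branches answer with 200 OK, each
# 200 establishes its own dialog. The UA must ACK every 200 (so the
# far end stops retransmitting) and send an immediate BYE on every
# dialog except the one it keeps.

def handle_forked_200s(dialogs, send_ack, send_bye, keep_index=0):
    """ACK all dialogs; BYE all but the one at keep_index."""
    kept = None
    for i, dialog in enumerate(dialogs):
        send_ack(dialog)          # every 200 OK needs an ACK, kept or not
        if i == keep_index:
            kept = dialog
        else:
            send_bye(dialog)      # tear down the branches we don't keep
    return kept

if __name__ == "__main__":
    acked, byed = [], []
    kept = handle_forked_200s(
        ["dlg-a", "dlg-b", "dlg-c"], acked.append, byed.append, keep_index=1)
    print(kept, acked, byed)  # dlg-b ['dlg-a', 'dlg-b', 'dlg-c'] ['dlg-a', 'dlg-c']
```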
<br />
TLS focused testing showed that the use of 'transport=tls' in Route,<br />
Record-Route, and Contact header fields is still pervasive, if not universal.<br />
The arguments against it in RFC5630 (was draft-ietf-sip-sips) are not<br />
compelling to implementers and deployers. Several participants pointed to the<br />
inconsistency of those arguments with the tokens that appear in Via headers<br />
registered at<br />
<http://www.iana.org/assignments/sip-parameters/sip-parameters.xml#sip-transport>.<br />
SIPCORE should consider whether to re-instate transport=tls or provide stronger<br />
documentation for how to indicate TLS over TCP in those header fields when it<br />
is necessary to do so. <br />
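For reference, this is the pattern in question, with a hypothetical proxy hostname: a Record-Route carrying the deprecated transport=tls URI parameter, and a minimal extraction of that parameter.

```python
import re

# Despite RFC 5630's deprecation of transport=tls, header fields like
# this are still commonly seen on the wire (hostname is made up).
record_route = "<sip:proxy.example.com:5061;transport=tls;lr>"

def uri_transport(uri):
    """Extract the transport= URI parameter, if present."""
    m = re.search(r";transport=([A-Za-z0-9]+)", uri)
    return m.group(1).lower() if m else None

print(uri_transport(record_route))  # tls
```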
<br />
TCP connection reuse has become generally well implemented.<br />
<br />
The participants chose to extend multiparty nat/firewall traversal testing<br />
across multiple days. We constructed a wide variety of network boundaries (see<br />
<https://www.sipit.net/images/d/da/800px-Sipit-evil-nat-design.001.png>). We had<br />
one implementation that included ice-tcp and was able to place calls from all<br />
positions in the networks. We encountered one b2bua that tried to change the<br />
candidates while forwarding messages if ice was not listed as a media feature<br />
tag in the corresponding registered contact, resulting in setup failure.<br />
Several UAs were not making optimal c-line address selections when behind nats<br />
and attempting to establish a session with a peer on a public IP that had no<br />
ICE support. These UAs could have provided relay or reflexive addresses, but<br />
offered their private address instead. A significant part of the sessions<br />
focused on when media should start flowing. There are several implementations<br />
doing something similar to (but not conformant with) trickle-ICE, sending media<br />
before ICE is completed. These will be moving to trickle ICE when it's clear<br />
how to perform the signalling.<br />
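The c-line selection problem described above can be sketched like this. The candidate list format and type labels here are assumptions for illustration; the point is that a relay or reflexive address should win over an RFC 1918 host address when the peer has no ICE support.

```python
import ipaddress

# RFC 1918 private ranges; an address here is unreachable from the
# public Internet without NAT traversal.
_PRIVATE_NETS = [ipaddress.ip_network(n)
                 for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_rfc1918(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in _PRIVATE_NETS)

def pick_c_line_address(candidates):
    """candidates: hypothetical (type, ip) pairs the UA has gathered.
    Prefer relay, then server-reflexive, then host; skip RFC 1918
    addresses whenever something publicly reachable is available."""
    preference = {"relay": 0, "srflx": 1, "host": 2}
    ordered = sorted(candidates, key=lambda c: preference.get(c[0], 3))
    for kind, ip in ordered:
        if not is_rfc1918(ip):
            return ip
    return ordered[0][1]  # nothing public: fall back to the best we have

if __name__ == "__main__":
    cands = [("host", "192.168.1.10"), ("srflx", "203.0.113.7")]
    print(pick_c_line_address(cands))  # 203.0.113.7
```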
<br />
The average SDP body is continuing to grow in length, with ICE being a strong<br />
contributor. A typical body captured during the multiparty tests was 4865 bytes long,<br />
offered 5 m= lines, and had a total of 27 a=candidate lines.<br />
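A rough illustration of why ICE inflates SDP size: counting m= lines and a=candidate lines in an offer. The sample body below is fabricated and much smaller than the ~4865-byte offers captured at the event.

```python
SAMPLE_SDP = "\r\n".join([
    "v=0",
    "o=- 1 1 IN IP4 192.0.2.1",
    "s=-",
    "m=audio 40000 RTP/AVP 0",
    "a=candidate:1 1 UDP 2130706431 192.168.1.10 40000 typ host",
    "a=candidate:2 1 UDP 1694498815 198.51.100.4 41000 typ srflx",
    "m=video 40002 RTP/AVP 96",
    "a=candidate:1 1 UDP 2130706431 192.168.1.10 40002 typ host",
])

def sdp_stats(body: str):
    """Return (size in bytes, number of m= lines, number of candidates)."""
    lines = body.split("\r\n")
    m_lines = sum(1 for l in lines if l.startswith("m="))
    candidates = sum(1 for l in lines if l.startswith("a=candidate:"))
    return len(body.encode()), m_lines, candidates

print(sdp_stats(SAMPLE_SDP))
```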
<br />
When testing RFC3263 behavior, we configured DNS to return a very large number<br />
of SRV records for a given request. Most clients got a truncated DNS response<br />
over UDP, and didn't try DNS over TCP. Many clients correctly handled DNS<br />
configurations that included both IPv4 and IPv6 entries. When a name<br />
had multiple A or AAAA records, clients used only one (per query). For SRV<br />
record sets where multiple records had the same host, we had a discussion on<br />
whether or not to compress the list and only try the same IP/port once. The<br />
answer was NO - trust the DNS and test the same server/port multiple times.<br />
Some clients' DNS libraries effectively followed CNAME after looking up SRVs<br />
instead of only searching A/AAAA (by paying too much attention to returned<br />
additional data), which the SRV specifications say not to do. <br />
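The SRV ordering that RFC 3263 clients should apply (from RFC 2782) can be sketched as below: sort by priority, then pick within each priority by weighted random selection. Per the discussion above, duplicate host/port pairs are not collapsed; each record is tried in its own right.

```python
import random
from collections import defaultdict

def order_srv(records):
    """Order SRV records per RFC 2782.

    records: list of (priority, weight, port, target) tuples.
    Lower priority comes first; within a priority, records are drawn
    by weighted random selection. Duplicates are deliberately kept.
    """
    by_priority = defaultdict(list)
    for rec in records:
        by_priority[rec[0]].append(rec)
    ordered = []
    for prio in sorted(by_priority):
        group = by_priority[prio]
        while group:
            total = sum(r[1] for r in group)
            pick = random.uniform(0, total) if total else 0
            running = 0
            for r in group:
                running += r[1]
                if pick <= running:
                    ordered.append(r)
                    group.remove(r)
                    break
    return ordered

if __name__ == "__main__":
    # Hypothetical record set for illustration.
    recs = [(20, 0, 5060, "backup.example.com"),
            (10, 60, 5060, "a.example.com"),
            (10, 40, 5060, "b.example.com")]
    print([r[3] for r in order_srv(recs)])
```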
<br />
During the event, we began advertising several IPv6 ULA prefixes. This exposed<br />
that some implementations had not differentiated address types when choosing<br />
which source address to use for a given message. We also saw these ULA addresses<br />
included by several implementations when forming candidates in SDP.<br />
<br />
As in previous events, we sent broadcast INVITEs, and messages with broadcast<br />
addresses or loopback addresses in Contacts and Record-Route. As usual, there <br />
were a few implementations that responded to the INVITEs, or sent to the <br />
inappropriate address provided in various URIs.<br />
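A hypothetical sanity check in the spirit of these torture tests: a UA should refuse to send requests to broadcast, multicast, or loopback addresses found in Contact or Record-Route URIs. The URI parsing here is a deliberately crude sketch, not a full RFC 3261 parser.

```python
import ipaddress
import re

def is_sane_target(uri: str) -> bool:
    """Return False for URIs whose host is an address no request
    should ever be sent to (loopback, multicast, limited broadcast)."""
    m = re.search(r"sip:(?:[^@;>]*@)?([0-9.]+)", uri)
    if not m:
        return True  # hostname targets get resolved via DNS instead
    try:
        addr = ipaddress.ip_address(m.group(1))
    except ValueError:
        return True  # not a literal IP after all
    return not (addr.is_loopback or addr.is_multicast
                or addr == ipaddress.ip_address("255.255.255.255"))

if __name__ == "__main__":
    print(is_sane_target("<sip:alice@255.255.255.255>"))  # False
    print(is_sane_target("<sip:alice@127.0.0.1>"))        # False
    print(is_sane_target("<sip:alice@198.51.100.20>"))    # True
```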
</pre></div>Rjs