So you want to play around with an IPv6-only network! Great! One problem: lots of content is still IPv4-only 🙁 A possible workaround? Using NAT64 & DNS64 to let IPv6-only hosts communicate with IPv4 hosts. The gist of it is that you send your DNS lookups to a caching DNS recursor that forges (or rather synthesizes) AAAA records for hosts that do not naturally have them configured. That DNS server creates the AAAA records using an IPv6 range that will also be configured on the NAT64 machine, which handles the actual translation between IPv6 and IPv4.
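The synthesized addresses are just the /96 prefix with the IPv4 address packed into the low 32 bits. Here is a rough shell illustration of that mapping (a sketch of the addressing scheme, not how BIND actually implements it; the prefix matches the example used later in this article):

```shell
#!/bin/sh
# Sketch: a DNS64 server synthesizes the AAAA by packing the IPv4
# address into the low 32 bits of the configured /96 prefix.
nat64_map() {
    prefix="$1"
    ip4="$2"
    # Split the dotted quad into four decimal fields.
    set -- $(echo "$ip4" | tr '.' ' ')
    # Emit the prefix plus the IPv4 address as two 16-bit hex groups.
    printf '%s%02x%02x:%02x%02x\n' "$prefix" "$1" "$2" "$3" "$4"
}

nat64_map 2001:db8:1:ffff:: 192.0.2.33
# prints: 2001:db8:1:ffff::c000:0221
```

Real tools will print the same address with the leading zero dropped (`...c000:221`); the two spellings are equivalent.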
I’ll use ISC’s BIND9 for the DNS64 component, and Tayga for the NAT64. Because Tayga is designed for Linux, this will cover setting up a Linux-based NAT64/DNS64 system. Start by installing the latest stable BIND9 daemon on your target machine. Next, pick a range of IPv6 addresses to generate the AAAAs from. We’ll go with the standard documentation prefix and specify 2001:DB8:1:FFFF::/96. Assuming you have proper routing control over your network, you’ll statically route the /64 that the /96 was carved from to the NAT64 machine. A very simple BIND9 named.conf.options can look like this:
options {
    directory "/var/cache/bind";
    auth-nxdomain no;
    listen-on-v6 { any; };
    allow-query { any; };
    dns64 2001:db8:1:ffff::/96 {
        clients { any; };
    };
};
You’ll want to lock down allow-query to whatever ranges/networks you plan on allowing DNS64 lookups from, as well as restrict listening to any specific IPv6 address you want it on. In the meantime, the above configuration will let any range query so you can test quickly. Start up the DNS daemon and make some queries against it for IPv4-only hostnames; they aren’t hard to find. Then try some queries for hosts you know have AAAA records. You’ll find that it doesn’t mangle them and will give you their proper AAAA records.
Next step is configuring Tayga. I’ve been installing both NAT64 and DNS64 components on the same machine, because I found that for a small network the load and traffic isn’t that much. So on the same machine, install Tayga from a package or source. Configure the tayga.conf file with:
tun-device nat64
ipv4-addr 192.168.0.1
prefix 2001:db8:1:ffff::/96
dynamic-pool 192.168.0.0/24
The next thing I did, after reading the Tayga READMEs and FAQ on their site, was set up a quick shell script to fire off (eventually called from rc.local), named start64.sh:
#!/bin/bash
# Create the nat64 tun device and give it the gateway addresses
tayga --mktun
ip link set nat64 up
ip addr add 192.168.0.1 dev nat64
ip addr add 2001:db8:1::1 dev nat64
# Route the dynamic IPv4 pool and the NAT64 /96 into the tun device
ip route add 192.168.0.0/24 dev nat64
ip route add 2001:db8:1:ffff::/96 dev nat64
# NAT the translated traffic out the IPv4 interface
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i eth0 -o nat64 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i nat64 -o eth0 -j ACCEPT
tayga
/etc/init.d/bind start
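One step the script itself does not cover: kernel forwarding has to be enabled for both address families, or nothing will be forwarded between eth0 and the tun device. On a typical Linux box that means (run as root, and persist in /etc/sysctl.conf if you want it to survive reboots):

```shell
# Enable IPv4 and IPv6 forwarding; required for Tayga's translated
# traffic to move between the tun device and the real interfaces.
sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv6.conf.all.forwarding=1
```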
So from an IPv6-only host, set your resolver to 2001:db8:1::1 (for this example), and try reaching an IPv4-only hostname with pings, traceroutes, or by connecting to the services it runs.
Now, to cover things that will need editing and additional configuration:
- IPv6 address(es) that BIND9 listens on
- IPv6 ranges that BIND9 will allow queries from
- actual IPv6 range used for NAT64
- adding ip6tables rules to restrict which IPv6 ranges are even allowed to use the NAT64 portion
Advanced tricks: BGP Anycast
So let us assume you want to try to blanket a network with these boxes for whatever reason, but perhaps the best reason of all: just to do it 🙂 Set aside a /48 to use for the NAT64/DNS64. Add either BIRD or Quagga to the machine, and configure it to announce that specific /48 to an upstream router, ideally as part of your iBGP mesh and as a route-reflector client. Configure both BIND9 & Tayga to use a /96 out of that /48. Use the same config on all the machines that will act as anycast nodes. I’ve tested this between two locations by running a wget of an ISO from an IPv4-only hostname, then pulling the /48 announcement from the node I saw the traffic going over. The wget didn’t even hiccup; it reported “Read error at byte 504296054/4312793088 (Connection reset by peer). Retrying.” and then kept pulling down the file without issue. More failover testing could be done, but I was happy with the results.
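For the routing-daemon side, a minimal sketch of what the BIRD (1.x, bird6) fragment might look like; the ASN, neighbor address, and /48 below are placeholders, not values from this article:

```
# Covering route for the anycast /48; the more-specific /96 route to the
# nat64 tun device handles actual delivery on this box.
protocol static anycast {
    route 2001:db8:100::/48 reject;
}

# iBGP session to the upstream router / route-reflector.
protocol bgp upstream {
    local as 64500;
    neighbor 2001:db8:ffff::1 as 64500;
    import none;
    export where source = RTS_STATIC;   # announce only the static /48
}
```

Withdrawing the announcement (or the node dying) pulls traffic to the next-nearest node, which is what the wget test above exercised.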
Caveat: IPv4 literals will not work since they aren’t hostnames with A records that can have AAAA records created.
hi there,
I have a question about NAT64 routing…
I have an internal network (Windows servers/clients) with IPv6 configured, using Linux openSUSE as a routing/firewall server.
These are my settings:
Linux machine FW001
network card eth0 = external network card which connects to the internet:
IP: x.x.5.2
subnet: /25 (255.255.255.128)
gw: x.x.5.1
dns: x.x.4.7 (gw already routes to the x.x.4.0/24 network)
network card eth1 = internal network card which connects to the clients and servers
2001:db8:21:100::1
firewall configured with MASQUERADE and IP-forwarding
radvd installed, and router advertisement/forwarding for IPv6 is enabled
tayga installed & configured (need help with this)
i have internet connection here
Windows machine DC001
IP: 192.168.1.254 (needed because I’m planning to implement a mail server with Exchange 2010, which still needs IPv4)
IP: 2001:db8:21:100::2
gw: 2001:db8:21:100::1
DNS: 2001:db8:21:100::2
DNS console is configured to forward to x.x.4.7 + has AAAA record of FW001
DHCPv6 deploys prefix to clients 2001:db8:21:100::/64
no internet connection here ! problem !
Now I tried to configure tayga.conf as follows:
tun-device NAT64
ipv4-addr 192.168.1.1 (TAYGA’s IP Address)
prefix 2001:db8:21:100::/64
dynamic-pool 192.168.1.0/24
data-dir /var/db/tayga
(before I start Tayga, I run some commands in bash)
tayga --mktun
ip link set NAT64 up
ip addr add 2001:db8:21:100::4 dev NAT64
ip addr add x.x.5.3 dev NAT64
ip route add 2001:db8:21:100::/64 dev NAT64
ip route add x.x.5.1/32 dev NAT64
iptables -t nat -A POSTROUTING -o eth0 -j ACCEPT
iptables -A FORWARD -i eth1 -o NAT64 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i NAT64 -o eth0 -j ACCEPT
(now I start Tayga with debug)
tayga -d
(it gives the following)
starting TAYGA x.x.x
Using tun device NAT64 with MTU 1500
TAYGA’s IPv4 address: 192.168.1.1
TAYGA’s IPv6 address: 2001:db8:21:100:c0:a801:100:0 (not from my DHCPv6)
NAT64 prefix: 2001:db8:21:100::/64
Dynamic pool: 192.168.1.0/24
Loaded 0 dynamic maps from /var/db/tayga/dynamic.map
What have I done wrong, and what should I do to get internet access for the servers/clients on the internal network?
Thank you very much in advance for your help.
Michael
Couple of things:
1) confirm that you are not actually configuring this with the IPv6 documentation prefix, but a proper globally routed prefix.
2) try using a different RFC1918 range for the NAT64 mapping from your internally used range.
3) are you using RADVD with the setting/flag to tell the machines to get their information from DHCPv6? The two kind of need to be used together, complementing each other, for connectivity.
4) are those listed DNS recursors running DNS64 to forge the AAAA records for destinations that don’t have them?
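To illustrate points 1 and 2 above, a tayga.conf along these lines keeps the NAT64 prefix out of the LAN’s 2001:db8:21:100::/64 and moves the dynamic pool off the internally used 192.168.1.0/24. The exact prefix and pool here are placeholders (and per point 1, a real deployment would use a globally routed prefix rather than 2001:db8, which is only the documentation stand-in used throughout this article):

```
tun-device nat64
ipv4-addr 192.168.255.1
prefix 2001:db8:21:ffff::/96
dynamic-pool 192.168.255.0/24
data-dir /var/db/tayga
```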
Hi Alex,
Thank you for your reply & the information given.
I want to remind you that DC001 (internal network) has the DNS records for internal IPv6 queries (the AAAA records are there; see the Windows machine DC001 specification). From my internal DNS server I’m forwarding to a DNS server on the external network side.
So basically the only thing I’m using Linux for is routing between my IPv6 internal network and the IPv4 external network… since the external network doesn’t support IPv6, I want it to route through the Linux NAT64. No, it can’t do reverse lookups, because I can’t make a route to the internet: TAYGA doesn’t route properly, since I have little knowledge of the TAYGA configuration with IPv6 hosts.
You need to have a global IPv6 address in order to make contact outside my own network prefix, because in the end the IPv6 clients need to connect to the internet using those global addresses; if I use private addresses the end client will never be able to connect to the internet.
note: the mapping in the internal network works between IPv6 addresses; I can ping / tracert / nslookup / map shares using IPv6 hosts.
I’m using RADVD for router advertisements on the internal network, and IPv6 forwarding is enabled, which TAYGA needs in order to perform the 6-to-4 NAT translation.
I hope you understand my purpose here 🙂
If anything is still unclear, please let me know your email address and I will show you my project flowchart so you know how it works.
Kind Regards,
Michael
In this case please help me: in which range should my router’s IPv6 LAN gateway IP and the NAT64 Ubuntu server’s eth0 IP be? I mean, in your example, what are the router IP and the server eth0 IP?
Please reply soon
If you mean the IPs for the WAN interface so the machine has real-world IPv6 connectivity, then whatever your provider assigned. The assumption of what I wrote was that your Linux box, which would act as a nat64/dns64 server, already had working IPv6 connectivity. It would ideally be in a range completely different from the one that the /96 was carved out of.
Let’s say you use a tunnel from tunnelbroker.net. Your WAN interface for all intents and purposes is that tunnel interface. Your router/server IPs are the “Server IP” and “Client IP”. You would then use the ROUTED /64 that the broker provides, as the space to configure the nat64 /96 out of.
Dear Alex,
Thank you very much for your prompt reply.
my topology is like this.
Router WAN interface (has both IPv4 & IPv6 connectivity) ———> Router LAN int (IPv4 192.168.0.254 & IPv6 X:X:X::1/64) ———> Switch ——–> Ubuntu 10.4 Server Ethernet int (IPv4 192.168.0.1 & IPv6 X:X:X::2/64)
please mention according to your tutorial what are the IPs you suggest for:
tayga.conf———–
ipv4-addr
prefix
dynamic-pool
commands——–
ip addr add __________ dev nat64
ip addr add __________ dev nat64
ip route add __________ dev nat64
ip route add ___________ dev nat64
Please reply.
Thank you,
So my example was based on a /48 being statically routed to the nat64 server (because I eventually went anycast and needed a /48 as the minimum specific announcement). Assuming you’ve statically routed a range different from uh… the “X:X:X::/64” that you used between the router and server, like say “X:X:X:B::/64”, then I suppose it could look a bit like:
ipv4-addr 192.168.1.254
prefix X:X:X:B:FFFF::/96
dynamic-pool 192.168.1.0/24
ip addr add 192.168.1.254 dev nat64
ip addr add X:X:X:B::1 dev nat64
ip route add 192.168.1.0/24 dev nat64
ip route add X:X:X:B:FFFF::/96 dev nat64
Thanks for the great article. This is the best proof yet that others have gotten Tayga to work. :\
I am a complete IPV6 novice, but am an experienced Linux engineer. I have spent 2 days on this and cannot get it to work yet. It’s close.
So first a very basic question. In the Tayga example, they say “your router’s IPv6 address” and “your router’s IPV4 address”:
ip addr add 192.168.0.1 dev nat64
ip addr add 2001:db8:1::1 dev nat64
By “your”, do they mean the IPv4/IPv6 addresses of an external router, or “your” as in “belonging to the box you are building”? Thanks for clearing this up.
My task is to facilitate connectivity to an embedded v4 device from a Windows machine running a V6 only stack (native Win7 stack but V4 disabled). The V4 and V6 networks have to live on separate media, so I need to build a box with 2 NICs and I believe Linux + Tayga are the best bet for a stack.
My demo does not need to be complex:
windows_box—->v6_router–>tayga_box–>v4_router—>ipv4_device
I have followed the directions, though I am a little confused about the iptables requirements. More on that in a sec.
First the good news: While logged in to the box, I can ping the v4 device with a v6 IP address (the /96 network address + the v4 address). However, I cannot see the v4 device from the v6 device. The ping returns “destination host unreachable”. From looking at wireshark logs, it seems like this response is coming from my tayga machine. I also see lots of neighbor discovery packets that I do not really understand.
I am using the IP addresses directly, so I do not think I need to worry about the DNS side.
Any suggestions?
I do not know much about iptables. I suspect this is where my problem lies.
My network configuration:
eth0 has a v6/64 address statically assigned.
eth1 has a v4/24 address assigned by the v4 router’s dhcp server.
My v6 router is a D-Link DIR-857 running the latest firmware. I have tried also to just statically assign Ip addresses to the two machines on the v6 side and use a switch but that did not improve things.
Paid support is on offer. This is an important problem for our little company to solve. Thanks in advance for your help!!!!
You might want to paste your configs for review. Alternatively you can find a bunch of us that have configured Tayga in #ipv6 on irc.freenode.net.
Here is my tayga.conf:
dvb@dvb-dev:~$ cat /usr/local/etc/tayga.conf
tun-device nat64
ipv4-addr 192.168.255.1
prefix fd3d:ef6a:cc20:0:1:ffff::/96
dynamic-pool 192.168.255.0/24
data-dir /var/db/tayga
Here is my startup script:
dvb@dvb-dev:~$ cat tayga.sh
#!/bin/sh
tayga --mktun
ip link set nat64 up
ip addr add 192.168.1.1 dev nat64
ip addr add fd3d:ef6a:cc20:: dev nat64
ip route add 192.168.255.0/24 dev nat64
ip route add fd3d:ef6a:cc20:0:1:ffff::/96 dev nat64
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i eth0 -o nat64 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i nat64 -o eth0 -j ACCEPT
tayga
My NIC configuration:
dvb@dvb-dev:~$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether e8:03:9a:1d:19:a6 brd ff:ff:ff:ff:ff:ff
inet6 fd3d:ef6a:cc20::222/64 scope global
valid_lft forever preferred_lft forever
inet6 2001:db8:0:3:0:1ff:0:2f/24 scope global
valid_lft forever preferred_lft forever
inet6 fe80::ea03:9aff:fe1d:19a6/64 scope link
valid_lft forever preferred_lft forever
3: wlan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
link/ether 88:53:2e:7e:fc:6b brd ff:ff:ff:ff:ff:ff
4: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:e0:20:11:0b:90 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.2/24 brd 192.168.1.255 scope global eth1
inet6 fe80::2e0:20ff:fe11:b90/64 scope link
valid_lft forever preferred_lft forever
dvb@dvb-dev:~$
I am using both the IPv4 and the IPv6 routers as switches. That is, the upstream port on each is unused.
IPv4 router is a Netgear WNR2000. Its IP address on the LAN side is 192.168.1.1.
IPv6 Router is a D-Link DIR-857 aka HD MediaRouter 3000. Its ULA prefix is
fd3d:ef6a:cc20:0000::/64. I manually added the IPv6 address to my NIC using:
dvb@dvb-dev:~$ sudo ip addr add fd3d:ef6a:cc20::222/64 dev eth0
I verified I can talk to the router using ping6.
Here is the rest of my notes:
The fd3d:ef6a:cc20:0::/64 prefix is defined in the router as its
“default ULA prefix”. I can change it to some other prefix if I want to, but
it has to be a /64.
FWIW my Ubuntu box does not get an address on this network automatically, but
a Windows7 box on the same LAN does get one when plugged in. Maybe that is
relevant.
Also, you can see there is a 2001:db8:0/24 address on my v6 NIC. Ubuntu seems to
add that all by itself; not sure where it is coming from.
Finally, I noticed the v6 router is also publishing a v4 route. I
cannot turn this off as far as I can tell. I removed the default route to
eth1:
dvb@dvb-dev:~$ netstat -rn
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
dvb@dvb-dev:~$
Distro: Ubuntu 12.04
kernel: 3.2.0-23-generic
hello alex,
I want to ask about NAT64/DNS64: what does the configuration look like if my topology is like this?
IMS server (IPv4, with BIND9 installed) –> NAT64/DNS64 gateway –> uctimsclient client (IPv6)
Actually, I’ve installed Tayga and totd for NAT64 and DNS64, but the IPv6 client still can’t ping the IMS server over IPv6. Any idea about this?
thanks and regards
hadi
hello alex, please respond to my question above. It’s my final project in college. Pleaseeee. Thanks.
Hi,
I am new to IPv6. For lab testing with IPv6/IPv4 translation, I use one Linux machine which has 2 LAN cards: one card has IPv4 connectivity, which takes me to the internet or my IPv4 network, and the 2nd card is for an IPv6 address.
I installed Tayga as per the above information, but I am still not clear on the actual requirements.
Do I need to define the same IPv4 pool in tayga.conf that we use in our local network?
As I configure 10.104.1.10/24 gw 10.104.1.1 on my 1st LAN card,
what should my IPv6 address be for testing purposes, and what should my tayga.conf look like?
I have another PC connected to this machine that can act as an IPv6 client; what IP and gateway should that machine use?
Ben
Use a different RFC1918 range from your local network; e.g., if your hosts are in 10/8, use 192.168/16.
As for IP/GW, if your tayga machine is going to act as the local IPv6 router as well, then you could use either static IP configurations, DHCPv6 or RADVD.
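If RADVD is the route taken, a minimal radvd.conf sketch might look like the following; the interface name (eth1) and the prefix are assumptions, so substitute your own:

```
# Advertise the LAN /64 on the inside interface so clients autoconfigure
# addresses and pick up this box as their default IPv6 router.
interface eth1
{
    AdvSendAdvert on;
    prefix 2001:db8:1::/64
    {
        AdvOnLink on;
        AdvAutonomous on;
    };
};
```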
hi,
I’m new to NAT64 configurations. I have three systems: A, B, and C. System A has only an IPv6 network, system C has only an IPv4 network, and system B has two NICs, one connected to system A via an IPv6 address and the other connected to system C via an IPv4 address. I have used the following IP addresses in my network:
A: ip address: fd00::2/64
gateway: fd00::1/64
B: eth0:- fd00::1/64
eth1:- 192.10.10.45/24
C: ip address: 192.10.10.46/24
gateway: 192.10.10.45
I am using Fedora 13 on system B. Please help me configure Tayga so that I can reach my IPv4 machine from the IPv6 machine. Also, tell me what other configuration I need on the two Ethernet interfaces of system B.
Is there any method to lock down the DNS64/NAT64 service (running on my local network) so that only clients inside my local network can use it? I don’t want to make my DNS64/NAT64 service available to the internet.
Extra info: I am using tunnelbroker.net 6in4 tunnel service.
You’d use ip6tables and only allow your local prefix to connect to the NAT64 service. BIND etc. should already have built-in allow/deny policy controls in their configs, but you could use ip6tables to manage that side as well. I don’t have the rules handy since I no longer run a NAT64/DNS64 service.
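A hedged sketch of what such ip6tables rules might look like (run as root; the local prefix 2001:db8:1::/48 and the tun device name nat64 are assumptions matching the examples earlier in this article):

```shell
# Allow only the local prefix to be forwarded into the NAT64 tun device,
# and drop everything else headed there.
ip6tables -A FORWARD -o nat64 -s 2001:db8:1::/48 -j ACCEPT
ip6tables -A FORWARD -o nat64 -j DROP
```

On the DNS64 side, the equivalent lockdown is changing `allow-query { any; };` and `clients { any; };` in named.conf.options to your local prefix.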