Is this config OK? Is the order OK?


Is this config OK? Is the order OK?

erdosain9
acl local_machines dst 192.168.1.0/24

###Kerberos Auth with ActiveDirectory###
auth_param negotiate program /lib64/squid/negotiate_kerberos_auth -s HTTP/squid.xxxxxxx.lan@xxxxxxx.LAN
auth_param negotiate children 25 startup=0 idle=1
auth_param negotiate keep_alive on

external_acl_type i-full %LOGIN /usr/lib64/squid/ext_kerberos_ldap_group_acl -g i-full@xxxxxxx.LAN
external_acl_type i-limitado %LOGIN /usr/lib64/squid/ext_kerberos_ldap_group_acl -g i-limitado@xxxxxxx.LAN

#GROUPS
acl i-full external i-full
acl i-limitado external i-limitado

####Block advertising ( http://pgl.yoyo.org/adservers/ )
acl ads dstdom_regex "/etc/squid/listas/ad_block.lst"
http_access deny ads
#deny_info TCP_RESET ads

####Streaming
acl youtube url_regex -i \.flv$
acl youtube url_regex -i \.mp4$
acl youtube url_regex -i watch?
acl youtube url_regex -i youtube
acl facebook url_regex -i facebook
acl facebook url_regex -i fbcdn\.net\/v\/(.*\.mp4)\?
acl facebook url_regex -i fbcdn\.net\/v\/(.*\.jpg)\?
acl facebook url_regex -i akamaihd\.net\/v\/(.*\.mp4)\?
acl facebook url_regex -i akamaihd\.net\/v\/(.*\.jpg)\?

##Denied domains
acl dominios_denegados dstdomain "/etc/squid/listas/dominios_denegados.lst"

#Puertos
acl SSL_ports port 443
acl SSL_ports port 8443
acl SSL_ports port 8080
acl SSL_ports port 20000
acl SSL_ports port 10000
acl SSL_ports port 2083

acl Safe_ports port 631         # httpCUPS
acl Safe_ports port 85
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 8443        # httpsalt
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl Safe_ports port 8080        # edesur and others
acl Safe_ports port 2199 # radio
acl CONNECT method CONNECT


#
# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# Only allow cachemgr access from localhost
http_access allow localhost manager
http_access deny manager

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
#http_access deny to_localhost

#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#

# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow i-limitado !dominios_denegados
http_access allow i-full !dominios_denegados
http_access allow localhost

# And finally deny all other access to this proxy
http_access deny all

# Squid normally listens to port 3128
http_port 127.0.0.1:3128
http_port 192.168.1.215:3128 ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB cert=/etc/squid/ssl_cert/myca.pem key=/etc/squid/ssl_cert/myca.pem

acl step1 at_step SslBump1

acl excludeSSL ssl::server_name_regex "/etc/squid/listas/excluidosSSL.lst"

ssl_bump peek step1
ssl_bump splice excludeSSL
ssl_bump bump all


# Uncomment and adjust the following to add a disk cache directory.
cache_dir diskd /var/spool/squid 15000 16 256
cache_mem 1000 MB
maximum_object_size_in_memory 1 MB

cache_swap_low 90
cache_swap_high 95

cache deny local_machines
quick_abort_min 1024 KB
quick_abort_max 2048 KB
quick_abort_pct 90

# Leave coredumps in the first cache dir
coredump_dir /var/spool/squid


#Your refresh_pattern
refresh_pattern -i \.jpg$ 30 0% 30 ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i ^http:\/\/www\.google\.com\/$ 0 20% 360 override-expire override-lastmod ignore-reload ignore-no-cache ignore-no-store reload-into-ims ignore-must-revalidate

#
# Add any of your own refresh_pattern entries above these.
#
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320

###ENABLE IN CASE OF "Connection reset by peer" ON MANY HOSTS
via off
forwarded_for delete
###

#Bandwidth delay pools
delay_pools 5

#YouTube bandwidth
delay_class 1 2
delay_parameters 1 1000000/1000000 50000/256000
delay_access 1 allow i-limitado youtube !facebook
delay_access 1 deny all

#Facebook bandwidth
delay_class 2 2
delay_parameters 2 1000000/1000000 50000/256000
delay_access 2 allow i-limitado facebook !youtube
delay_access 2 deny all

#Bandwidth: YouTube, full group
delay_class 3 1
delay_parameters 3 1000000/1000000
delay_access 3 allow i-full youtube !facebook
delay_access 3 deny all

#Bandwidth: limited group
delay_class 4 3
delay_parameters 4 3000000/3000000 1000000/1000000 256000/512000
delay_access 4 allow i-limitado !youtube !facebook
delay_access 4 deny all

#Bandwidth: full group
delay_class 5 3
delay_parameters 5 1500000/1500000 750000/750000 256000/512000
delay_access 5 allow i-full !youtube !facebook
delay_access 5 deny all

dns_nameservers 192.168.1.222 8.8.8.8
visible_hostname squid.xxxxxxx.lan

# try connecting to first 25 ips of a domain name
forward_max_tries 25

# fix some ipv6 errors (recommended to comment out)
dns_v4_first on

# c-icap integration
# -------------------------------------
# Adaptation parameters
# -------------------------------------
icap_enable on
icap_send_client_ip on
icap_send_client_username on
icap_client_username_header X-Authenticated-User
icap_preview_enable on
icap_preview_size 1024
icap_service service_avi_req reqmod_precache icap://127.0.0.1:1344/squidclamav bypass=on
adaptation_access service_avi_req allow all
icap_service service_avi_resp respmod_precache icap://127.0.0.1:1344/squidclamav bypass=off
adaptation_access service_avi_resp allow all
# end integration

Re: Is this config OK? Is the order OK?

Amos Jeffries
Administrator
The answer to your question really depends on what your policies are for
who may use the proxy and what it may be used for.

The config describes one set of policies. But if those are not the one(s)
you actually want, then the config is incorrect even if it
"looks okay".


If I assume that it's doing what you want, there are still two major
issues that can be seen.

1) Mixing interception and authentication (ssl-bump is a type of
interception, at least on the https:// traffic). Intercepted messages
cannot be authenticated - though there are some workarounds in place for
ssl-bump to authenticate the CONNECT tunnel and label all the bumped
traffic with that username.
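
For example, a minimal sketch of that workaround (the "authed" ACL name is
only illustrative, not something taken from your config) is to require proxy
authentication on the CONNECT request itself, placed before the allow rules:

acl authed proxy_auth REQUIRED
http_access deny CONNECT !authed

The bumped requests inside that tunnel are then labelled with the username
that authenticated the CONNECT.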

2) Using 8.8.8.8 directly in squid.conf can be amazingly harmful to
performance. Despite the hype and marketing around Google services, the
behaviour of this one is actively detrimental to the HTTP persistent
connections feature - namely it load-balances which of their endpoint
servers is handling each DNS query. As such, Squid often sees domains
rotating to a completely different bunch of IP addresses every TTL,
which in turn means it cannot easily re-use any open connections to the
prior bunch of IPs. The result is a huge churn on TCP sockets and
unnecessary delays waiting for the new ones to open.
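
The usual fix (a sketch only - 192.168.1.222 is the internal resolver already
present in your config) is to let Squid talk only to that local resolver and
let it do the forwarding to 8.8.8.8 itself:

dns_nameservers 192.168.1.222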


And there are a few minor polishing things you can do. But it's not worth
spending time on them until you are sure the config actually imposes your
real wanted policy on the traffic.

Amos


Re: Is this config OK? Is the order OK?

erdosain9
"If I assume that its doing what you want there are still two major
issues that can be seen."................. i think it was...

"1) Mixing interception and authentication (ssl-bump is a type of
interception, at least on the https:// traffic). Intercepted messages
cannot be authenticated - though there are some workarounds in place for
ssl-bump to authenticate the CONNECT tunnel and label all the bumped
traffic with that username."

How is that? Maybe I'm wrong (probably), but, for example, a connection to YouTube is SSL, and I can see in access.log who made it (it is authenticated). So am I wrong? Why?

2) We have a DNS server (192.168.1.222) that holds just our internal DNS names and then forwards to 8.8.8.8... would that (192.168.1.222) DNS server not be useful either?

Sorry for my ignorance, and thanks.

Re: Is this config OK? Is the order OK?

Amos Jeffries
Administrator
On 02/06/17 01:10, erdosain9 wrote:

> "If I assume that its doing what you want there are still two major
> issues that can be seen."................. i think it was...
>
> "1) Mixing interception and authentication (ssl-bump is a type of
> interception, at least on the https:// traffic). Intercepted messages
> cannot be authenticated - though there are some workarounds in place for
> ssl-bump to authenticate the CONNECT tunnel and label all the bumped
> traffic with that username."
>
> How is that? Maybe I'm wrong (probably), but, for example, a connection to
> YouTube is SSL, and I can see in access.log who made it (it is
> authenticated). So am I wrong? Why?

That is the hack workaround doing its thing. Squid is authenticating the
CONNECT message, then simply reporting that authenticated username for
all the bumped https:// log entries. In its current form/code it sort-of
works most of the time, but can break (start rejecting everything) if
there is ever even the slightest wobble in the credentials' validity while
the bump of that tunnel is ongoing.


> 2) We have a DNS server (192.168.1.222) that holds just our internal DNS
> names and then forwards to 8.8.8.8... would that (192.168.1.222) DNS
> server not be useful either?

The core issue is the speed at which that service rotates its response
IP lists, which is directly related to each request going to an entirely
different server in their farm. Simply having a single (and maybe more
sane regarding TTLs) resolver as a network's focal point for the traffic
before it reaches out to the Google service seems to bring sanity back
to the performance.

Amos


Re: Is this config OK? Is the order OK?

Alex Rousskov
On 06/01/2017 09:17 AM, Amos Jeffries wrote:
> On 02/06/17 01:10, erdosain9 wrote:
>> "If I assume that its doing what you want there are still two major
>> issues that can be seen."................. i think it was...
>>
>> "1) Mixing interception and authentication (ssl-bump is a type of
>> interception, at least on the https:// traffic). Intercepted messages
>> cannot be authenticated - though there are some workarounds in place for
>> ssl-bump to authenticate the CONNECT tunnel and label all the bumped
>> traffic with that username."

Bumped messages cannot be proxy-authenticated but the CONNECT tunnels
that carry bumped messages can be, and such proxy authentication does
not violate any rules or principles. It is perfectly fine to use.
Furthermore, logging the authenticated tunnel user when logging
transactions inside that tunnel is the right thing to do IMO.
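
For what it's worth, a sketch of how one could make that username visible in
the log (the "withuser" format name and the log path are only examples, not
part of the original config):

logformat withuser %ts.%03tu %>a %un %rm %ru %Ss/%03>Hs
access_log /var/log/squid/access_user.log withuser

The %un code logs the user name from whatever source is available - here,
the user who authenticated the CONNECT tunnel.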


>> How is that? Maybe I'm wrong (probably), but, for example, a connection to
>> YouTube is SSL, and I can see in access.log who made it (it is
>> authenticated).
>
> That is the hack workaround doing its thing. Squid is authenticating the
> CONNECT message, then simply reporting that authenticated username for
> all the bumped https:// log entries.

FWIW, I do not think this is a hack. It is exactly what Squid should be
doing in this context. There may be bugs in the implementation of that
functionality, of course, but the functionality itself is a legitimate
feature, not a workaround.

Alex.

Re: Is this config OK? Is the order OK?

erdosain9
In reply to this post by Amos Jeffries
Amos Jeffries wrote
The core issue is the speed at which that service rotates its response
IP lists, which is directly related to each request going to an entirely
different server in their farm. Simply having a single (and maybe more
sane regarding TTLs) resolver as a network's focal point for the traffic
before it reaches out to the Google service seems to bring sanity back
to the performance.
OK, thanks.
Hmm... and what do you think about this?

dig -x google.com

; <<>> DiG 9.9.4-RedHat-9.9.4-29.el7_2.3 <<>> -x google.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 25260
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4000
;; QUESTION SECTION:
;com.google.in-addr.arpa. IN PTR

;; AUTHORITY SECTION:
in-addr.arpa. 900 IN SOA b.in-addr-servers.arpa. nstld.iana.org. 2017042647 1800 900 604800 3600

;; Query time: 1 msec
;; SERVER: 192.168.1.222#53(192.168.1.222)
;; WHEN: lun jun 05 12:37:03 ART 2017
;; MSG SIZE  rcvd: 120

We have very little time, about 15 minutes. This is a problem, don't you think?
Or am I using dig wrong?
What would be a good value?
Thanks again.

Re: Is this config OK? Is the order OK?

Antony Stone
On Monday 05 June 2017 08:24:00 erdosain9 wrote:

> Hmm... and what do you think about this?
>
> dig -x google.com

Firstly, I think "what is this supposed to mean?"

dig -x is for reverse lookups - you give it an address and it tells you what
name has been assigned in a PTR record (as opposed to a forward lookup, which
takes a name and tells you what address(es) have been assigned in A or AAAA
records).

Therefore "dig -x" with a name makes no sense to me.

> ;; AUTHORITY SECTION:
> in-addr.arpa. 900 IN SOA b.in-addr-servers.arpa. nstld.iana.org. 2017042647 1800 900 604800 3600

> We have very little time, about 15 minutes. This is a problem, don't you think?

That's the TTL for lookups on a *root* name server - in fact I'm slightly
surprised it's even as long as 15 minutes.

A large TTL on root name servers would prevent the rest of the Internet from
finding out about new name/address assignments in a timely manner.

> Or am I using dig wrong?

Yes, I think that's the problem.

> What would be a good value?

What would be a good value for what?

What are you trying to achieve / find out?


Antony.

--
I lay awake all night wondering where the sun went, and then it dawned on me.

                                                   Please reply to the list;
                                                         please *don't* CC me.


Re: Is this config OK? Is the order OK?

erdosain9
Hi. From what I understood, the TTL of DNS names is important. So, I wanted to know when the Squid server would ask for resolution again. That is, how long the record is kept.
Thanks

PS: without -x

[root@squid ~]# dig yahoo.com

; <<>> DiG 9.9.4-RedHat-9.9.4-29.el7_2.3 <<>> yahoo.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 6258
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4000
;; QUESTION SECTION:
;yahoo.com. IN A

;; ANSWER SECTION:
yahoo.com. 590 IN A 98.138.253.109
yahoo.com. 590 IN A 98.139.183.24
yahoo.com. 590 IN A 206.190.36.45

;; Query time: 4 msec
;; SERVER: 192.168.1.222#53(192.168.1.222)
;; WHEN: lun jun 05 16:00:44 ART 2017
;; MSG SIZE  rcvd: 86

[root@squid ~]# dig pijamasurf.com

; <<>> DiG 9.9.4-RedHat-9.9.4-29.el7_2.3 <<>> pijamasurf.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 17497
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4000
;; QUESTION SECTION:
;pijamasurf.com. IN A

;; ANSWER SECTION:
pijamasurf.com. 299 IN A 104.24.25.112
pijamasurf.com. 299 IN A 104.24.26.112

;; Query time: 71 msec
;; SERVER: 192.168.1.222#53(192.168.1.222)
;; WHEN: lun jun 05 16:02:15 ART 2017
;; MSG SIZE  rcvd: 75


I wish I could set a bigger TTL, to avoid having to ask again for the same address every "little amount of time". For example pijamasurf.com = 299 and yahoo = 590, so who manages that time? How can I set a longer time to live?
Or does this make no sense?
Maybe I did not understand Amos's comment. (I thought I could read English better :-))
Thanks

Re: Is this config OK? Is the order OK?

Antony Stone
On Monday 05 June 2017 11:50:42 erdosain9 wrote:

> Hi. From what I understood, the TTL of DNS names is important.

Yes, TTL is important.  It tells caching DNS servers how long they may
remember the last answer they got from the authoritative server, before they
need to ask the authoritative server again.

> So, I wanted to know when the squid server would ask for resolution again.

Well, that's a different question.

Q: When will Squid ask [its configured name server] for resolution again?
A: When it needs to know the answer again.

Q: When will the [recursive] DNS server which Squid asks, ask for resolution
again?
A: When the TTL has expired.

> That is, how long was the record kept.

That is the TTL.

> ;; ANSWER SECTION:
> yahoo.com. 590 IN A 98.138.253.109

> ;; ANSWER SECTION:
> pijamasurf.com. 299 IN A 104.24.25.112

> I wish I could set a bigger TTL, to avoid having to ask again for the same
> address every "little amount of time".

Why?  What does it matter to you that your DNS server has to refresh its
results from Yahoo no more than 30 minutes after the last time?  (Your example
of 590 fails to mention that you clearly asked your local name server for
yahoo.com 1210 seconds previously.)  If you want to know the real TTL, ask an
authoritative name server:

$ dig @ns1.yahoo.com. yahoo.com

;; ANSWER SECTION:
yahoo.com.              1800    IN      A       98.139.183.24

If you only ask your local caching server, all you are finding out is how much
longer its cached answer is valid for, before it will ask (the authoritative
servers) again.

> For example pijamasurf.com = 299 and yahoo = 590, so
> who manages that time?

Whoever maintains the zone files (DNS records) for those domains.

> How can I set a longer time to live?

You cannot (and should not).

> Or does this make no sense?

Why do you want to change the TTL on somebody else's domain?

What (do you think) is the benefit for you?

> Maybe I did not understand Amos's comment.

Please repeat the comment which led you to trying to change the TTL of other
people's domains - maybe that will help us better understand what you are
trying to achieve,


Antony.

--
Never automate fully anything that does not have a manual override capability.
Never design anything that cannot work under degraded conditions in emergency.

                                                   Please reply to the list;
                                                         please *don't* CC me.


Re: Is this config OK? Is the order OK?

Alex Rousskov
On 06/05/2017 01:24 PM, Antony Stone wrote:
> On Monday 05 June 2017 11:50:42 erdosain9 wrote:
>> So, I wanted to know when the squid server would ask for resolution again.

> Q: When will Squid ask [its configured name server] for resolution again?
> A: When it needs to know the answer again.

This answer is incorrect because Squid has its own DNS cache.

Squid pays attention to DNS record TTLs. I believe there are several
bugs in that code, but, in many cases, Squid will not cache the answer
for less than negative_dns_ttl and for longer than the minimum of the
received TTL and positive_dns_ttl. DNS answers may be purged from the
Squid DNS cache for reasons other than TTL, of course.
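
For reference, a sketch of those two directives in squid.conf (the values are
illustrative; as far as I recall they match the usual Squid defaults):

# upper bound on how long Squid caches a successful DNS answer
positive_dns_ttl 6 hours
# cache time for failed lookups; also the lower bound for successful ones
negative_dns_ttl 1 minute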


>> How can I set a longer time to live?

> You cannot (and should not).

One can increase effective TTL in Squid DNS cache by increasing
negative_dns_ttl. Whether one _should_ do that is a different question
that I cannot answer in general.


HTH,

Alex.

Re: Is this config OK? Is the order OK?

Amos Jeffries
Administrator
In reply to this post by Antony Stone
On 06/06/17 07:24, Antony Stone wrote:

> [...]
>> Maybe I did not understand Amos's comment.
> Please repeat the comment which led you to trying to change the TTL of other
> people's domains - maybe that will help us better understand what you are
> trying to achieve,
>

I suspect it was this comment:

> The core issue is the speed at which that service rotates its response
> IP lists, which is directly related to each request going to an entirely
> different server in their farm. Simply having a single (and maybe more
> sane regarding TTLs) resolver as a network's focal point for the
> traffic before it reaches out to the Google service seems to bring
> sanity back to the performance.

What I meant there was using a resolver that obeys the domain TTL it
gets given and stores the result until that TTL expires.

The way the Google service load-balances does not allow that to happen -
your user's query, your Squid query, and your later test query will each
reach a different server - all of which probably have different TTL
values coming back (as you saw in that dig result: 590 != 1800 and 299 !=
1800 ... the Google service is just a farm of recursive resolvers, all
with different cache contents). By having your own resolver between you
and the Google service/farm, that resolver takes only _one_ TTL period at
a time from Google - and delivers that result to all your clients and
Squid etc. until its single TTL period expires. After that it asks Google
again, gets just one TTL to follow, and so on.
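
A quick way to see the difference for yourself (commands only, using names
and addresses already mentioned in this thread):

$ dig @192.168.1.222 yahoo.com A   # answer (and remaining TTL) from your own resolver's cache
$ dig @8.8.8.8 yahoo.com A         # answer straight from the Google farm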


Amos


Re: Is this config OK? Is the order OK?

erdosain9
Oh, OK!
So... does it make no sense to try to have a big TTL?
Because, OK, if I use just my own DNS resolver then "they" get just one TTL, and not one per user.
But wouldn't it be better to have a long TTL?
Is the IP attached to a domain name really changing so quickly (every 15 minutes, for example)? I don't understand that, because if it is not changing so quickly, why are those values so low?
Thanks (and again... sorry... for... my... ignorance... and my bad writing)

Re: Is this config OK? Is the order OK?

Alex Rousskov
On 06/06/2017 08:16 AM, erdosain9 wrote:

> if it is not changing so quickly, why are those values so low?

Low TTLs for stable DNS records can be used for several reasons, including:

* To track DNS resolvers asking for those records.
* To be able to change those DNS records on short notice.
* To improve round-robin effects of multiple IP addresses.

Alex.