Windows Updates a Caching Stub zone, A windows updates store.


Re: Windows Updates a Caching Stub zone, A windows updates store.

Eliezer Croitoru
Hey Omid,

I will comment inline.
There are also a couple of details we need to understand before we can sort out these issues.

----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: [hidden email]


-----Original Message-----
From: squid-users [mailto:[hidden email]] On Behalf Of Omid Kosari
Sent: Monday, July 25, 2016 12:15 PM
To: [hidden email]
Subject: Re: [squid-users] Windows Updates a Caching Stub zone, A windows updates store.

Hi,

Thanks for the support.

Recently I have seen a problem with version beta 0.2. While the fetcher is working, the kernel logs many of the following errors:
TCP: out of memory -- consider tuning tcp_mem

# To verify the actual status we need the output of:
$ free -m
$ cat /proc/sys/net/ipv4/tcp_mem
$ top -n1 -b
$ cat /proc/net/sockstat
$ cat /proc/sys/net/ipv4/tcp_max_orphans
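The commands above can be bundled into a small wrapper that collects everything into one report file for sharing (a sketch; the report path is an arbitrary choice, not something the tool provides):

```shell
#!/bin/sh
# Sketch: collect the diagnostics requested above into a single report file.
# The report path is an illustrative default, override with the first argument.
report="${1:-/tmp/tcp-mem-report.txt}"

{
  echo "== free -m ==";           free -m 2>/dev/null
  echo "== tcp_mem ==";           cat /proc/sys/net/ipv4/tcp_mem 2>/dev/null
  echo "== top -n1 -b (head) =="; top -n1 -b 2>/dev/null | head -40
  echo "== sockstat ==";          cat /proc/net/sockstat 2>/dev/null
  echo "== tcp_max_orphans ==";   cat /proc/sys/net/ipv4/tcp_max_orphans 2>/dev/null
} > "$report"

echo "wrote $report"
```

Attaching one file like this is easier than pasting each command's output inline.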

I think the problem is related to the orphaned connections I mentioned before.
I will try the new version to see what happens.

# If you have orphaned connections on the machine, with or without the MS updates proxy, you should consider analyzing the machine's structure and load in general.
If there are indeed orphan connections, we need to verify whether they come from Squid, from my service, or from the combination of the two.


Also, I have a feature request. Please provide a configuration file, for example in /etc/foldername or even beside the binary files, with selective options for both the fetcher and the logger.

# With what options for the logger and fetcher?

I have seen the following change log:
beta 0.3 - 19/07/2016
+ Upgraded the fetcher to honour private and no-store cache-control headers when fetching objects.

From my point of view, the more hits the better, and there is no problem storing private and no-store objects if it helps achieve more hits and more bandwidth savings. So it would be fine to have an option in the mentioned config file to change this myself.

# I understand your way of looking at things, but this is a very wrong way to look at caching and storing.
The problem with storing private and no-store responses is very simple:
these files are temporary and exist for one request only (in most cases).
Specifically for MS this is true, and they do not use private files more than once.
I do not wish to offend you or anyone by not honoring such a request, but since it's a public service, this is the definition of it.
If you want to see the options of the fetcher and the service, just add the "-h" option to see the available options.
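To illustrate, this is roughly the kind of check a fetcher has to make, a sketch mirroring the beta 0.3 behaviour described above (honouring "private" and "no-store"), not the tool's actual code:

```shell
# Sketch: decide storability from a Cache-Control header value.
# Illustrative only; it honours "private" and "no-store" as the
# beta 0.3 change log describes, ignoring all other directives.
may_store() {
  cc=$(printf '%s' "$1" | tr 'A-Z' 'a-z')
  case "$cc" in
    *no-store*|*private*) return 1 ;;  # must not be stored
    *)                    return 0 ;;  # storable (other checks aside)
  esac
}

may_store "public, max-age=172800" && echo "storable"
may_store "private, max-age=0"     || echo "not storable"
may_store "no-store"               || echo "not storable"
```

Making this configurable would mean letting users turn that check off, which is exactly what the public-service argument above rules out.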

I have considered using a log file, but I have yet to settle on a specific format that I want to work with.
I will try to see what can be done with log files, and also what should be done to handle log rotation.

Thanks again


## Resources
* http://blog.tsunanet.net/2011/03/out-of-socket-memory.html

_______________________________________________
squid-users mailing list
[hidden email]
http://lists.squid-cache.org/listinfo/squid-users

Re: Windows Updates a Caching Stub zone, A windows updates store.

Omid Kosari
In reply to this post by Eliezer Croitoru
Hey Eliezer,

According to these threads
http://squid-web-proxy-cache.1019090.n4.nabble.com/range-offset-limit-not-working-as-expected-td4679355.html

http://squid-web-proxy-cache.1019090.n4.nabble.com/TProxy-and-client-dst-passthru-td4670189.html

Is there any chance that you could implement something similar for other popular sites that serve 206 partial responses, like download.cdn.mozilla.net? I think it has the same problem as Windows Update and generates lots of uncachable requests.
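On the Squid side, the directives usually involved with partial (206) content are range_offset_limit and quick_abort_*. A sketch of squid.conf lines sometimes used to force full-object fetches for selected domains, per the range-offset-limit thread linked above (tune with care; an unlimited range_offset_limit can multiply upstream traffic):

```
# squid.conf sketch: fetch whole objects for chosen CDNs so ranged
# (206) requests can later be served from a complete cached copy.
acl fullfetch dstdomain .download.cdn.mozilla.net
range_offset_limit -1 fullfetch   # no limit: always fetch from offset 0
quick_abort_min -1 KB             # keep downloading even if the client aborts
```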

Thanks in advance.

Re: Windows Updates a Caching Stub zone, A windows updates store.

Eliezer Croitoru
Hey Omid,

For now the software is restricted to Windows updates only, which is protected and secured enough to sustain caching.
About Mozilla, I need to verify it before I do anything about it.
From my point of view, it is hosted on Akamai, and HSTS restricts a couple of things on their service.
I will try to look at it later, without any promises.

Do you have any starting points other than the domain itself?
Have you tried to analyze some logs?

Eliezer

----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: [hidden email]



Re: Windows Updates a Caching Stub zone, A windows updates store.

Omid Kosari
In reply to this post by Eliezer Croitoru
Hey Eliezer,

Recently I have found that the fetcher script is very busy and is always downloading. It seems that Microsoft changed something; I am not sure, it is just a guess.

What's up at your servers?

Re: Windows Updates a Caching Stub zone, A windows updates store.

Eliezer Croitoru
I am not using it daily, but I know that MS updates support more than one language, and it's pretty simple to verify the subject.
Just send me privately a tar of the requests and headers files, and I will try to see whether something changed.
I do not believe that MS would change their system to such an extent, but I have no issue looking at the subject and trying to verify what is causing the load on your side.
Just so you know, it might take me time to verify the issue.

Also, what does "busy" mean for you?
Are you using a lock file? (It might be possible that your server is downloading in some loop, and this is what is causing the load.)
Did you upgrade to the latest version?
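The lock-file idea can be as small as an flock(1) wrapper around the fetcher invocation, a sketch assuming the fetcher is started from cron (the lock path and command are illustrative):

```shell
#!/bin/sh
# Sketch: allow at most one fetcher run at a time using flock(1).
# Lock path and the fetcher command are illustrative assumptions.
LOCK="${LOCK:-/tmp/ms-updates-fetcher.lock}"

run_once() {
  # -n: fail immediately instead of queueing a second run behind the first
  flock -n "$LOCK" -c "$1" || echo "another fetcher holds the lock, skipping"
}

run_once "echo fetching..."   # replace with the real fetcher invocation
```

Without a guard like this, overlapping cron runs can re-download the same requests in a loop, which matches the load pattern being described.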

Eliezer

----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: [hidden email]




Re: Windows Updates a Caching Stub zone, A windows updates store.

Omid Kosari
Thanks for the reply.

Eliezer Croitoru wrote
Also, what does "busy" mean for you?
The fetcher script is always downloading. For example, right now I can see that a fetcher script has been running for more than 3 days, downloading files one by one.

Eliezer Croitoru wrote
Are you using a lock file? (It might be possible that your server is downloading in some loop, and this is what is causing the load.)
Yes. Everything looks fine in that respect.

Eliezer Croitoru wrote
Did you upgrade to the latest version?
Yes.


I will send you the files.

Thanks

Re: Windows Updates a Caching Stub zone, A windows updates store.

Omid Kosari
In reply to this post by Eliezer Croitoru
Hello,

I sent the files you mentioned to your email 2 days ago.

A little more investigation shows that some big files (~2 GB) download slowly (~100 KBytes/s) while some others download much faster. That problem is related to networking (BGP and IXP), and the fetcher script cannot solve it.

But is there a way to run more than one fetcher script at the same time, so downloads happen in parallel rather than one by one? There is free bandwidth, but the fetcher script takes a long time on some downloads.

Thanks again for your support

Re: Windows Updates a Caching Stub zone, A windows updates store.

Eliezer Croitoru
Did you get my answer?
You should be able to dispatch more than one fetcher, but you should somehow manage them and restrict their number and dispatch rate.
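One hedged way to bound the number of concurrent fetchers is xargs -P, a sketch assuming one fetcher invocation per request file (the directory and the fetcher command are illustrative):

```shell
#!/bin/sh
# Sketch: process request files with a bounded number of parallel
# fetchers. Request directory and fetcher command are illustrative;
# the echo stands in for the real per-request fetcher invocation.
REQ_DIR="${REQ_DIR:-/tmp/requests}"
PARALLEL=4

mkdir -p "$REQ_DIR"
find "$REQ_DIR" -type f -print0 |
  xargs -0 -r -n1 -P"$PARALLEL" sh -c 'echo "fetching $1"' _
```

Bounding -P keeps the socket and memory footprint under control, which matters given the tcp_mem errors reported earlier in this thread.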

Eliezer

----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: [hidden email]




Re: Windows Updates a Caching Stub zone, A windows updates store.

Omid Kosari
Hello,

Thanks

Re: Windows Updates a Caching Stub zone, A windows updates store.

Omid Kosari
In reply to this post by Eliezer Croitoru
Hello,

The old problem appears again, because I am running 6 instances at the same time:

TCP: out of memory -- consider tuning tcp_mem

Here is the output of the commands you mentioned.



root@squidbox:~#  free -m
              total        used        free      shared  buff/cache   available
Mem:          16037       13503         147         171        2386        1652
Swap:          8179         576        7603
root@squidbox:~# cat /proc/sys/net/ipv4/tcp_mem
381630  508844  763260
root@squidbox:~# top -n1 -b
top - 15:07:11 up 2 days, 10:04,  1 user,  load average: 3.50, 3.24, 3.13
Tasks: 190 total,   2 running, 188 sleeping,   0 stopped,   0 zombie
%Cpu(s): 19.1 us,  6.2 sy,  0.5 ni, 67.5 id,  1.9 wa,  0.0 hi,  4.8 si,  0.0 st
KiB Mem : 16422248 total,   166088 free, 13829624 used,  2426536 buff/cache
KiB Swap:  8376316 total,  7786056 free,   590260 used.  1690840 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 1620 proxy     20   0 3977368 3.606g   4708 S  46.7 23.0   1272:29 squid
 4451 root      20   0  632240 592500    584 S  26.7  3.6 404:11.34 ms-updates-fetc
 2573 root      20   0  636068 594116    612 S  20.0  3.6 403:12.45 ms-updates-fetc
 3661 root      20   0  646392 607532    500 S  20.0  3.7 427:31.70 ms-updates-fetc
 4171 root      20   0  656876 619500    604 S  20.0  3.8 460:54.17 ms-updates-fetc
 3934 root      20   0  642160 604896    492 S  13.3  3.7 411:05.97 ms-updates-fetc
 4555 root      20   0  656732 619700   1548 R  13.3  3.8 478:16.31 ms-updates-fetc
14746 root      20   0   40552   3464   2892 R   6.7  0.0   0:00.01 top
    1 root      20   0  185784   4972   3112 S   0.0  0.0   0:12.01 systemd
    2 root      20   0       0      0      0 S   0.0  0.0   0:00.69 kthreadd
    3 root      20   0       0      0      0 S   0.0  0.0   0:16.70 ksoftirqd/0
    5 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 kworker/0:0H
    7 root      20   0       0      0      0 S   0.0  0.0   7:11.28 rcu_sched
    8 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcu_bh
    9 root      rt   0       0      0      0 S   0.0  0.0   0:00.23 migration/0
   10 root      rt   0       0      0      0 S   0.0  0.0   0:00.54 watchdog/0
   11 root      rt   0       0      0      0 S   0.0  0.0   0:00.50 watchdog/1
   12 root      rt   0       0      0      0 S   0.0  0.0   0:00.22 migration/1
   13 root      20   0       0      0      0 S   0.0  0.0   0:52.29 ksoftirqd/1
   15 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 kworker/1:0H
   16 root      rt   0       0      0      0 S   0.0  0.0   0:00.48 watchdog/2
   17 root      rt   0       0      0      0 S   0.0  0.0   0:00.40 migration/2
   18 root      20   0       0      0      0 S   0.0  0.0  10:22.44 ksoftirqd/2
   20 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 kworker/2:0H
   21 root      rt   0       0      0      0 S   0.0  0.0   0:00.49 watchdog/3
   22 root      rt   0       0      0      0 S   0.0  0.0   0:00.24 migration/3
   23 root      20   0       0      0      0 S   0.0  0.0   0:16.35 ksoftirqd/3
   25 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 kworker/3:0H
   26 root      20   0       0      0      0 S   0.0  0.0   0:00.00 kdevtmpfs
   27 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 netns
   28 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 perf
   29 root      20   0       0      0      0 S   0.0  0.0   0:00.13 khungtaskd
   30 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 writeback
   31 root      25   5       0      0      0 S   0.0  0.0   0:00.00 ksmd
   32 root      39  19       0      0      0 S   0.0  0.0   0:35.21 khugepaged
   33 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 crypto
   34 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 kintegrityd
   35 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 bioset
   36 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 kblockd
   37 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 ata_sff
   38 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 md
   39 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 devfreq_wq
   43 root      20   0       0      0      0 S   0.0  0.0  81:22.18 kswapd0
   44 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 vmstat
   45 root      20   0       0      0      0 S   0.0  0.0   0:00.17 fsnotify_mark
   46 root      20   0       0      0      0 S   0.0  0.0   0:00.00 ecryptfs-kthrea
   62 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 kthrotld
   63 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 acpi_thermal_pm
   64 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 bioset
   65 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 bioset
   66 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 bioset
   67 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 bioset
   68 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 bioset
   69 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 bioset
   70 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 bioset
   71 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 bioset
   72 root      20   0       0      0      0 S   0.0  0.0   0:00.02 scsi_eh_0
   73 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 scsi_tmf_0
   74 root      20   0       0      0      0 S   0.0  0.0   0:00.01 scsi_eh_1
   75 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 scsi_tmf_1
   77 root      20   0       0      0      0 S   0.0  0.0   0:00.04 scsi_eh_2
   78 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 scsi_tmf_2
   79 root      20   0       0      0      0 S   0.0  0.0   0:00.01 scsi_eh_3
   80 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 scsi_tmf_3
   88 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 ipv6_addrconf
  102 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 deferwq
  103 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 charger_manager
  108 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 bioset
  110 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 bioset
  111 root       0 -20       0      0      0 S   0.0  0.0   0:08.58 kworker/0:1H
  112 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 bioset
  113 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 bioset
  115 root       0 -20       0      0      0 S   0.0  0.0   0:01.65 kworker/2:1H
  116 root       0 -20       0      0      0 S   0.0  0.0   0:06.23 kworker/1:1H
  117 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 bioset
  163 root      20   0       0      0      0 S   0.0  0.0   0:00.00 scsi_eh_4
  164 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 scsi_tmf_4
  170 root       0 -20       0      0      0 S   0.0  0.0   0:19.22 kworker/3:1H
  171 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 kpsmoused
  237 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 ttm_swap
  432 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 bioset
  433 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 bioset
  434 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 bioset
  435 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 bioset
  700 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 kdmflush
  703 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 bioset
  729 root      20   0       0      0      0 S   0.0  0.0   0:17.71 jbd2/dm-0-8
  730 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 ext4-rsv-conver
  776 root      20   0       0      0      0 S   0.0  0.0   0:00.00 kauditd
  779 root      20   0   43924   5732   3444 S   0.0  0.0   7:32.43 systemd-journal
  800 root      20   0  102976    516    516 S   0.0  0.0   0:00.00 lvmetad
  806 root      20   0   45616   2432   2000 S   0.0  0.0   0:12.24 systemd-udevd
  983 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 kvm-irqfd-clean
 1003 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 reiserfs/sdd1
 1004 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 reiserfs/sdc1
 1005 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 reiserfs/sdf1
 1012 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 reiserfs/sdg1
 1016 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 reiserfs/sdi1
 1022 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 reiserfs/sdh1
 1024 root      20   0       0      0      0 S   0.0  0.0   0:10.50 jbd2/sde1-8
 1025 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 ext4-rsv-conver
 1032 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 reiserfs/sdb1
 1039 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 kdmflush
 1040 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 bioset
 1048 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 ext4-rsv-conver
 1138 systemd+  20   0  100324   1592   1548 S   0.0  0.0   0:00.20 systemd-timesyn
 1253 root      20   0   28644   2328   2128 S   0.0  0.0   0:02.90 systemd-logind
 1255 syslog    20   0  256396   2276   1148 S   0.0  0.0   2:15.29 rsyslogd
 1263 root      20   0    4400   1012   1012 S   0.0  0.0   0:00.00 acpid
 1267 root      20   0   27324    948    620 S   0.0  0.0   0:00.88 smartd
 1271 message+  20   0   43028   1228    892 S   0.0  0.0   0:10.03 dbus-daemon
 1320 root      20   0  274588    592    168 S   0.0  0.0   1:05.66 accounts-daemon
 1329 root      20   0   27736   1764   1656 S   0.0  0.0   0:04.45 cron
 1334 whoopsie  20   0  345040    972    432 S   0.0  0.0   0:00.09 whoopsie
 1338 root      20   0   29888    212    212 S   0.0  0.0   0:00.00 cgmanager
 1350 daemon    20   0   26052   1112   1084 S   0.0  0.0   0:00.00 atd
 1352 root      20   0  472528  47288   3292 S   0.0  0.3  42:29.62 ms-updates-logg
 1473 root      20   0   19480   1548   1440 S   0.0  0.0   0:06.55 irqbalance
 1478 proxy     20   0    7464    284      0 S   0.0  0.0   0:00.25 polipo
 1488 root      20   0   18852      0      0 S   0.0  0.0   0:00.00 daemon
 1489 root      20   0    4508   1576   1484 S   0.0  0.0   0:00.27 megaraidsas-sta
 1492 root      20   0   18852      0      0 S   0.0  0.0   0:00.00 daemon
 1493 root      20   0    4508   1576   1496 S   0.0  0.0   0:00.23 megaclisas-stat
 1544 dnsmasq   20   0   51596   1544   1432 S   0.0  0.0   0:32.33 dnsmasq
 1618 root      20   0  129492    112    112 S   0.0  0.0   0:00.00 squid
 1624 root      20   0  277096   1036    732 S   0.0  0.0   0:00.02 polkitd
 1781 proxy     20   0   30224     36     36 S   0.0  0.0   1:29.41 log_file_daemon
 1932 vnstat    20   0    7536   1860   1788 S   0.0  0.0   0:06.87 vnstatd
 1935 root      20   0   65528   1680   1568 S   0.0  0.0   0:00.57 sshd
 1995 debian-+  20   0   97576  40352   4080 S   0.0  0.2   3:08.26 tor
 2008 root      20   0   14660   1036   1036 S   0.0  0.0   0:00.00 agetty
 2025 snmp      20   0   58552   5260   2724 S   0.0  0.0   2:23.73 snmpd
 2060 root      20   0  441600  11468   3444 S   0.0  0.1   3:00.98 fail2ban-server
 2065 root      20   0   92552   4496   2564 S   0.0  0.0   0:04.84 miniserv.pl
 2555 root      20   0   67456   1516   1476 S   0.0  0.0   0:00.00 cron
 2556 root      20   0    4508    348    348 S   0.0  0.0   0:00.00 sh
 2562 root      20   0   11236   1348   1348 S   0.0  0.0   0:00.00 bash
 2577 snmp      20   0   18804   2972   2508 S   0.0  0.0   0:11.78 perl
 3635 root      20   0   67456   1528   1488 S   0.0  0.0   0:00.00 cron
 3638 root      20   0    4508    376    376 S   0.0  0.0   0:00.00 sh
 3641 root      20   0   11236   1380   1380 S   0.0  0.0   0:00.00 bash
 3916 root      20   0   67456   1528   1488 S   0.0  0.0   0:00.00 cron
 3919 root      20   0    4508    244    244 S   0.0  0.0   0:00.00 sh
 3922 root      20   0   11236   1312   1312 S   0.0  0.0   0:00.00 bash
 4145 root      20   0   67456   1528   1488 S   0.0  0.0   0:00.00 cron
 4154 root      20   0    4508    400    400 S   0.0  0.0   0:00.00 sh
 4158 root      20   0   11236   1356   1356 S   0.0  0.0   0:00.00 bash
 4433 root      20   0   67456   1528   1488 S   0.0  0.0   0:00.00 cron
 4441 root      20   0    4508    332    332 S   0.0  0.0   0:00.00 sh
 4445 root      20   0   11236   1372   1372 S   0.0  0.0   0:00.00 bash
 4529 root      20   0   67456   1528   1488 S   0.0  0.0   0:00.00 cron
 4533 root      20   0    4508    324    324 S   0.0  0.0   0:00.00 sh
 4538 root      20   0   11236   1380   1380 S   0.0  0.0   0:00.00 bash
 9791 root      20   0 2100128   3268   3084 S   0.0  0.0   0:00.08 console-kit-dae
12173 root      20   0       0      0      0 S   0.0  0.0   0:00.93 kworker/u32:0
12997 root      20   0       0      0      0 S   0.0  0.0   0:00.00 kworker/3:0
13232 root      20   0       0      0      0 S   0.0  0.0   0:01.73 kworker/2:0
13458 root      20   0       0      0      0 S   0.0  0.0   0:01.48 kworker/0:1
13475 root      20   0       0      0      0 S   0.0  0.0   0:01.45 kworker/3:1
13558 root      20   0       0      0      0 S   0.0  0.0   0:00.34 kworker/u32:1
13656 root      20   0       0      0      0 S   0.0  0.0   0:01.12 kworker/1:1
13658 root      20   0       0      0      0 S   0.0  0.0   0:00.40 kworker/2:2
13809 root      20   0       0      0      0 S   0.0  0.0   0:00.16 kworker/u32:3
13896 root      20   0       0      0      0 S   0.0  0.0   0:00.00 kworker/1:2
13898 root      20   0       0      0      0 S   0.0  0.0   0:00.00 kworker/0:0
14055 root      20   0       0      0      0 S   0.0  0.0   0:00.00 kworker/2:1
14103 root      20   0       0      0      0 S   0.0  0.0   0:00.26 kworker/u32:4
14411 root      20   0  105996   6804   5800 S   0.0  0.0   0:00.03 sshd
14413 root      20   0       0      0      0 S   0.0  0.0   0:00.00 kworker/3:2
14414 omid      20   0   45256   4452   3760 S   0.0  0.0   0:00.08 systemd
14417 omid      20   0  225564   2144      0 S   0.0  0.0   0:00.00 (sd-pam)
14521 omid      20   0  105996   3424   2424 S   0.0  0.0   0:00.00 sshd
14522 omid      20   0   21288   5116   3252 S   0.0  0.0   0:00.08 bash
14537 root      20   0   70080   4036   3344 S   0.0  0.0   0:00.01 sudo
14538 root      20   0   21476   5248   3192 S   0.0  0.0   0:00.05 bash
14554 root      20   0       0      0      0 S   0.0  0.0   0:00.00 kworker/2:3
14639 root      20   0       0      0      0 S   0.0  0.0   0:00.00 kworker/1:0
14647 root      20   0       0      0      0 S   0.0  0.0   0:00.00 kworker/0:2
14703 root      20   0    6012    660    588 S   0.0  0.0   0:00.00 sleep
14740 root      20   0    6012    764    692 S   0.0  0.0   0:00.00 sleep
14742 root      20   0   67456   3436   2944 S   0.0  0.0   0:00.00 cron
14743 root      20   0    4508    760    680 S   0.0  0.0   0:00.00 sh
14744 root      20   0    6036    672    600 S   0.0  0.0   0:00.00 iostat
14745 root      20   0   15032   1108   1000 S   0.0  0.0   0:00.00 sed
22792 proxy     20   0   26420   6352   1392 S   0.0  0.0   0:33.16 storeid_file_re
22793 proxy     20   0   26420   6360   1400 S   0.0  0.0   0:00.68 storeid_file_re
22794 proxy     20   0   26420   6372   1408 S   0.0  0.0   0:00.20 storeid_file_re
22795 proxy     20   0   26420   6420   1456 S   0.0  0.0   0:00.15 storeid_file_re
22796 proxy     20   0   26420   6236   1396 S   0.0  0.0   0:00.14 storeid_file_re
32745 root       0 -20   18228   5852   3548 S   0.0  0.0   0:02.26 atop
root@squidbox:~# cat /proc/net/sockstat
sockets: used 15109
TCP: inuse 2337 orphan 879 tw 2073 alloc 15789 mem 767999
UDP: inuse 5 mem 5
UDPLITE: inuse 0
RAW: inuse 0
FRAG: inuse 0 memory 0
root@squidbox:~# cat /proc/sys/net/ipv4/tcp_max_orphans
65536
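For reference: the sockstat output above reports TCP using 767999 pages, which is above the tcp_mem hard limit of 763260 shown earlier, the condition that typically triggers "TCP: out of memory". A hedged sketch of the arithmetic, using the values posted above:

```shell
# Sketch: compare TCP page usage (the "mem" field of the TCP line in
# /proc/net/sockstat) against the tcp_mem thresholds. Both are counted
# in pages. Values below are the ones posted in this message.
tcp_used=767999
tcp_min=381630; tcp_pressure=508844; tcp_max=763260

if [ "$tcp_used" -ge "$tcp_max" ]; then
  echo "over hard limit: kernel will log 'TCP: out of memory'"
elif [ "$tcp_used" -ge "$tcp_pressure" ]; then
  echo "under memory pressure"
else
  echo "ok"
fi
```

With six fetcher instances plus Squid, either reducing concurrency or raising tcp_mem (after confirming there is RAM to back it) would relieve this.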



Re: Windows Updates a Caching Stub zone, A windows updates store.

Omid Kosari
I have deleted and recreated the request directory and I see a huge decrease in the memory usage of the fetcher process.

Did I do the right thing? Is there anything I should do after a while?

Re: Windows Updates a Caching Stub zone, A windows updates store.

Eliezer Croitoru
Technically it depends on the GoLang garbage collection.
I will try to upgrade the software to GoLang 1.8.1 in the next few days, but no promises on a specific date.
The next article might help you:
https://www.howtogeek.com/howto/ubuntu/delete-files-older-than-x-days-on-linux/

You can try to use the atime rather than the mtime.
In any case, if you are running out of memory, please dump the top -n1 output into a file and attach it or use some paste service, so I can read it; pasted inline, the lines wrap in a way that is hard for me to read.
It is possible that some fetchers will consume lots of memory, and some of the requests are indeed un-needed, but don't delete them.
Try to archive them, and only then remove some of them by age or something similar.
Once you have a request you have the option to fetch the file, and since a request is such a small thing (max 64k per request), it's better to save and archive first and only later wonder whether some file request is missing.
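The archive-then-prune advice can be sketched like this, assuming a flat request directory as in this thread (paths and the 30-day cutoff are illustrative):

```shell
#!/bin/sh
# Sketch: archive request files first, then prune ones not modified for
# $2 days. Directory, archive path and cutoff are illustrative choices.
archive_old_requests() {
  dir="$1"; days="$2"; archive="$3"
  tar -czf "$archive" -C "$dir" .                 # save everything first
  find "$dir" -type f -mtime +"$days" -delete     # then prune by age
}

# Example against a scratch directory:
mkdir -p /tmp/req-demo
touch /tmp/req-demo/new.req
touch -d "40 days ago" /tmp/req-demo/old.req
archive_old_requests /tmp/req-demo 30 /tmp/requests-backup.tar.gz
```

Because the archive is written before anything is deleted, a request pruned too eagerly can always be restored from the tarball and re-fetched.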

All The Bests,
Eliezer

* If you want me to test or analyze your archived requests, pack them into an xz archive and send them over to me.

----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: [hidden email]




Re: Windows Updates a Caching Stub zone, A windows updates store.

Omid Kosari
Eliezer Croitoru wrote
You can try to use the atime and not the mtime.
Each time the fetcher script runs, all of the request files are accessed, so their atime gets refreshed.
I think for the "request" directory it should be mtime, and for the "body" directory it should be atime.

Eliezer Croitoru wrote
It is possible that some fetchers will consume lots of memory and some of the requests are indeed un-needed but... don’t delete them.
Try to archive them and only then remove from them some by their age or something similar.
Once you have the request you have the option to fetch files and since it's such a small thing(max 64k per request) it's better to save and archive first and later wonder if some file request is missing.
But currently there are more than 230000 files in the old request directory. Maybe the garbage collector of GoLang does not release the memory after processing each file.


Eliezer Croitoru wrote
* if you want me to test or analyze your archived requests archive them inside a xz and send them over to me.
I sent you the request directory in a previous private email.

Thanks