anyone knows some info about youtube "range" parameter?

28 messages

anyone knows some info about youtube "range" parameter?

Eliezer Croitoru-2
As some people have been asking me recently about YouTube caching, I
checked again and found that YouTube changed their video URIs and added
an argument called "range" that is managed by the YouTube player.
The original URL/URI doesn't include range, but the YouTube player uses
this argument to save bandwidth.

I can implement the caching with ranges on nginx, but I don't yet know
how the range works.
It can be based on user bandwidth or a "fixed" size of chunks.

If someone is up for the mission of analyzing it a bit more so that the
"range" cache can be implemented, I will be happy to get some help with
it.

Thanks,
Eliezer


--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer <at> ngtech.co.il
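[Editor's note: the playback URLs under discussion carry their arguments in the query string; a quick way to inspect them is sketched below. The URL is an invented example, not a real playback URL.]

```python
# Sketch: pull the id/itag/range arguments out of a videoplayback-style URL.
from urllib.parse import urlsplit, parse_qs

url = ("http://r3.example.googlevideo.com/videoplayback"
       "?id=abc123&itag=34&range=13-1700012&sver=3")

# parse_qs maps each argument name to a list of values
args = parse_qs(urlsplit(url).query)
print(args["id"][0], args["itag"][0], args["range"][0])
```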

Re: anyone knows some info about youtube "range" parameter?

Amos Jeffries
Administrator
On 25/04/2012 6:02 a.m., Eliezer Croitoru wrote:

> <snip>

I took a look at it a while back...

I got as far as determining that the "range" was roughly byte-ranges as
per the HTTP spec, BUT (and this is a huge BUT) each response was
prefixed with some form of file intro bytes. Meaning the ranges were
not strictly sub-ranges of some original object. At this point there is
no way for Squid to correctly generate the intro bytes, or to
merge/split these "ranges" for servicing other clients.

When used, the transfer is relatively efficient, so the impact of
bypassing the storeurl cache feature is not too bad. The other option
is to re-write the URL without range and simply reply with the whole
video regardless. It is a nasty mapping problem with bandwidth waste
either way.

That was a year or two ago, so it may be worth re-investigating.

Amos
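[Editor's note: the merge problem Amos describes can be sketched abstractly: if every ranged response carries its own intro bytes, concatenating responses never reproduces the original object. A toy model follows; the intro bytes, payload, and chunk size are invented placeholders, not YouTube's real format.]

```python
# Sketch of why these "ranges" are not plain HTTP byte-ranges:
# each chunked response is prefixed with its own container intro bytes,
# so responses cannot simply be concatenated back into the original file.
INTRO = b"FLV\x01"             # placeholder intro; the real bytes are unknown here
media = bytes(range(256)) * 4  # stand-in for the full media payload

def ranged_response(start, end):
    # what the server sends for range=start-end: intro + payload slice
    return INTRO + media[start:end + 1]

full = INTRO + media
joined = ranged_response(0, 511) + ranged_response(512, 1023)
assert joined != full  # the second chunk's repeated intro breaks the byte stream
```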

Re: anyone knows some info about youtube "range" parameter?

Eliezer Croitoru-2
On 25/04/2012 06:02, Amos Jeffries wrote:

> On 25/04/2012 6:02 a.m., Eliezer Croitoru wrote:
>> <snip>
>
> I took a look at it a while back...
>
> I got as far as determining that the "range" was roughly byte-ranges as
> per the HTTP spec BUT (and this is a huge BUT). Each response was
> prefixed with some form of file intro bytes. Meaning the ranges were not
> strictly sub-bytes of some original object. At this point there is no
> way for Squid to correctly generate the intro bytes, or to merge/split
> these "ranges" for servicing other clients.
>
> When used the transfer is relatively efficient, so the impact of
> bypassing the storeurl cache feature is not too bad. The other option is
> to re-write the URL without range and simply reply with the whole video
> regardless. It is a nasty mapping problem with bandwidth waste either way.
>
They have changed something in the last month or so.
They were using a "begin" argument, and now they are using
"range=13-X", where 13 is the first offset.
I was also thinking of rewriting the address, since it works perfectly
in my testing.

Will update more later.

Eliezer
> That was a year or two ago, so it may be worth re-investigating.
>
> Amos


--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer <at> ngtech.co.il

Re: anyone knows some info about youtube "range" parameter?

Ghassan
Hello,

As I remember, I already discussed this subject before, mentioning that
YouTube several months ago added a new URI variable, "range". I tried
to deny all URLs that come with "range" to avoid presenting the error
in the YouTube player, but then tried to investigate more and came up
with a solution like this:

--------------------------------------------------------------------------------

                        # youtube 360p itag=34, 480p itag=35 [ITAG/ID/RANGE]
                        # (host alternation: raw-IP hosts or youtube/googlevideo hostnames)
if (m/^http:\/\/([0-9.]+|.*\.youtube\.com|.*\.googlevideo\.com|.*\.video\.google\.com)\/.*(itag=[0-9]*).*(id=[a-zA-Z0-9]*).*(range=[0-9\-]*)/) {
        print $x . "http://video-srv.youtube.com.SQUIDINTERNAL/" . $3 . "&" . $4 . "\n";

                        # youtube 360p itag=34, 480p itag=35 [ID/ITAG/RANGE]
} elsif (m/^http:\/\/([0-9.]+|.*\.youtube\.com|.*\.googlevideo\.com|.*\.video\.google\.com)\/.*(id=[a-zA-Z0-9]*).*(itag=[0-9]*).*(range=[0-9\-]*)/) {
        print $x . "http://video-srv.youtube.com.SQUIDINTERNAL/" . $2 . "&" . $4 . "\n";

                        # youtube 360p itag=34, 480p itag=35 [RANGE/ITAG/ID]
} elsif (m/^http:\/\/([0-9.]+|.*\.youtube\.com|.*\.googlevideo\.com|.*\.video\.google\.com)\/.*(range=[0-9\-]*).*(itag=[0-9]*).*(id=[a-zA-Z0-9]*)/) {
        print $x . "http://video-srv.youtube.com.SQUIDINTERNAL/" . $4 . "&" . $2 . "\n";
}
--------------------------------------------------------------------------------------

I already discovered that rewriting them and saving them as
videoplayback?id=0000000&range=00-000000 would solve the problem, but
the thing is the cache folder grows faster, because we are not saving
only one file; we are saving multiple files for one ID!

As for me, it saves a lot of bandwidth but needs a bigger cache. If you
check and analyze it more, you will notice that for the same ID, or the
same video, the link changes while watching, for example:

It starts as [ITAG/ID/RANGE], then changes to [ID/ITAG/RANGE] and
finally to [RANGE/ITAG/ID], so with my script you can capture all the
cases!


Ghassan
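[Editor's note: the three order-dependent branches above can be collapsed by parsing the query string instead of matching parameter order. A sketch in Python follows; the SQUIDINTERNAL host is taken from the script above, and the key here also includes itag, which the script matches but does not store.]

```python
# Sketch: build one canonical store URL regardless of whether the request
# carries the arguments as [ITAG/ID/RANGE], [ID/ITAG/RANGE] or [RANGE/ITAG/ID].
from urllib.parse import urlsplit, parse_qs

STORE_HOST = "http://video-srv.youtube.com.SQUIDINTERNAL/"

def store_url(url):
    args = parse_qs(urlsplit(url).query)
    if not {"id", "itag", "range"} <= args.keys():
        return url  # not a ranged video URL; leave it alone
    # a fixed id/itag/range order keeps the cache key consistent
    return (STORE_HOST + "id=" + args["id"][0]
            + "&itag=" + args["itag"][0]
            + "&range=" + args["range"][0])
```

All three argument orderings then map to the same store key, so one branch covers every case.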

On 4/25/12, Eliezer Croitoru <[hidden email]> wrote:

> <snip>

Re: anyone knows some info about youtube "range" parameter?

Hasanen AL-Bana
Wouldn't it be better if we save the video chunks? YouTube is streaming
files in 1.7MB FLV chunks, and the YouTube flash player knows how to
merge them and play them... so the range start and end will always be
the same for the same video, as long as the user doesn't fast-forward
or do something nasty. Even in that case, Squid will just cache that
chunk. That is possible by rewriting the STORE_URL and including the
range start & end.
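[Editor's note: assuming the fixed chunk size described above, the range values for sequential playback become predictable. A sketch follows; the 1.7 MB figure is the thread's rough number and the 13-byte start offset is taken from an earlier post, so both are unverified assumptions.]

```python
# Sketch: predictable range=start-end values for sequential playback,
# assuming a fixed chunk size and a fixed first offset (both unverified).
CHUNK = 1_700_000   # "1.7MB" from the thread; the real size may differ
FIRST = 13          # first offset seen in "range=13-X"

def chunk_ranges(total_bytes, n):
    start = FIRST
    for _ in range(n):
        end = min(start + CHUNK - 1, total_bytes - 1)
        yield f"{start}-{end}"
        start = end + 1

print(list(chunk_ranges(5_000_000, 3)))
# e.g. ['13-1700012', '1700013-3400012', '3400013-4999999']
```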

On Wed, Apr 25, 2012 at 8:39 PM, Ghassan Gharabli
<[hidden email]> wrote:

> <snip>

Re: anyone knows some info about youtube "range" parameter?

Eliezer Croitoru-2
In reply to this post by Ghassan
On 25/04/2012 20:39, Ghassan Gharabli wrote:
> Hello,
>
> As i remember I already discussed this subject before mentioning that
> Youtube several months ago added a new variable/URI "RANGE". I tried
> to deny all URLs that comes with "RANGE" to avoid presenting the error
> at Youtube Player butb tried to investigate more and came with a
> solution like that :
>
Nice solution!
For quite a long time I have used Squid (any version) with nginx as a
cache_peer, so I don't deny anything / any URL.
The main thing for me is that the user will not feel in any way that
there is a cache mechanism!!!
I haven't used store_url_rewrite for a long time, because nginx has
some really nice features that help cache a lot of sites with dynamic
content.
I added the "range" argument to nginx pretty easily, but the main
problem is that the "204" (end of data for the HTTP GET content) for
some reason won't be sent to nginx while playing the file, and the
YouTube player will get to a point where it will not "preload" the next
"range" until it gets to the end of the current "range".
So in this case I can think of three options to solve it:
1. Make nginx do an HTTP/1.0 request, which solves the problem of
getting stuck at the end of each chunk (1.7 MB).
2. Rewrite any YouTube video URL and strip the "range" parameter.
3. Use store_url_rewrite to store id, itag and range.

Problems:
1. I don't know how this can be done on nginx.
2. This is really the best solution and will avoid any complication of
the "range" thing, but it will cause users who jump into some part of
the video to be returned to the first second of the video.
This is one hell of a major problem for users!!
With the "begin" argument that YouTube was using, you could just pass
such requests to YouTube without any caching involved.
So another solution is to cache the whole video for requests with a
range of "13-XX", which means it will start to download the whole video
for a "start of the video" request, and all other chunks will be passed
without any cache.
The problem is that I haven't yet had the time to check the result of
rewriting the basic "range" that starts with 13, and to see how the
YouTube player will accept the response of the "full" file without
range.
3. This is one of the best choices, but I haven't had the time to try
it yet.




> <snip>
>
> I already discovered that by rewriting them and save them as
> videplayback?id=0000000&range=00-000000 would solve the problem but
> the thing is the cache folder would be increased faster because we are
> not only saving one file as we are saving multiple files for one ID!.
>
> AS for me , it saves alot of bandwidth but bigger cache . If you check
> and analyze it more then you will notice same ID or same videop while
> watching the link changes for example :
>
> It starts [ITAG/ID/RANGE] then changes to [ID/ITAG/RANGE] and finally
> to [RANGE/ITAG/ID] so with my script you can capture the whole
> places!.
This has a pretty simple solution: use one store_url syntax and order.
I don't know why you didn't use the itag at all in your store_url
rewriting.
You should store "itag", "id" and "range" in a specific order to
maintain a "consistent" cache syntax.

About the ranges themselves: I did find that they advance in a specific
order, which makes them pretty easy to cache, and the share of parts
that are not in the regular 1.7 MB playback chunk order is not that
big, because most users use basic playback without any "jumping" in the
middle.
Losing some disk space (double it, for the users who jump randomly in a
video) in exchange for bandwidth savings is worth it most of the time.

About the id, itag and range order in the URL: because I was using
nginx I didn't have this problem at all.
nginx uses the info from the URL args simply and smoothly.
Generally store_url_rewrite has much more potential to be
cache-effective than nginx proxy_store, as nginx proxy_store is a
permanent store mechanism without any time-limit calculation.
As of now nginx has the option to integrate with Perl, which can be
used for many things such as request manipulation.

Another option I was thinking of is ICAP, to rewrite the URL or do some
other stuff.

But since nginx has been fine until now, I have been working with it.

Regards,
Eliezer
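[Editor's note: the "cache the whole video only for a start-of-video request" idea above could be sketched as a routing decision. The 13 offset is taken from the thread; the action names are made up for illustration.]

```python
# Sketch: route a request depending on its "range" argument.
#  - first chunk (starts at 13): rewrite to the range-less URL, cache whole video
#  - later chunks: pass through uncached
#  - no range at all: cache normally
from urllib.parse import urlsplit, parse_qs

def route(url):
    args = parse_qs(urlsplit(url).query)
    rng = args.get("range", [None])[0]
    if rng is None:
        return "cache"
    if rng.split("-")[0] == "13":
        return "strip-range-and-cache"
    return "pass-through"
```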

>
>
> Ghassan
>
> On 4/25/12, Eliezer Croitoru<[hidden email]>  wrote:
<snip>


--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer <at> ngtech.co.il

Re: anyone knows some info about youtube "range" parameter?

Eliezer Croitoru-2
In reply to this post by Hasanen AL-Bana
On 25/04/2012 20:48, Hasanen AL-Bana wrote:
> <snip>

As I already gave a detailed answer to Ghassan: I think that caching
the chunks, if possible, is a pretty good thing.
I tried it with nginx, but haven't had the chance to try it with
store_url_rewrite.

Hope to try it somewhere next week.
Regards,
Eliezer

>
> On Wed, Apr 25, 2012 at 8:39 PM, Ghassan Gharabli
> <[hidden email]>  wrote:
>> <snip>


--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer <at> ngtech.co.il

Re: anyone knows some info about youtube "range" parameter?

Christian Loth
Hi,

On Thursday 26 April 2012 03:44:58 Eliezer Croitoru wrote:
> as i already answered a detailed answer to Ghassan.
> i think that caching the chunks if possible is pretty good thing.
> i tried it with nginx but havnt got the option to try it with
> store_url_rewrite.

To maybe save you some work, here's how I did it. First of all, I use nginx as
a cache-peer - so no URL rewriting script. Excerpt of my squid.conf:


acl youtube_videos url_regex -i ^http://[^/]+(\.youtube\.com|\.googlevideo\.com|\.video\.google\.com)/(videoplayback|get_video|videodownload)\?
acl range_request req_header Range .
acl begin_param url_regex -i [?&]begin=
acl id_param url_regex -i [?&]id=
acl itag_param url_regex -i [?&]itag=
acl sver3_param url_regex -i [?&]sver=3
cache_peer 127.0.0.1 parent 8081 0 proxy-only no-query connect-timeout=5 no-digest
cache_peer_access 127.0.0.1 allow youtube_videos id_param itag_param sver3_param !begin_param !range_request
cache_peer_access 127.0.0.1 deny all

Small note: the range request that is denied in this configuration is
the HTTP Range header, not the "range" URL parameter! Nginx is of
course running on port 8081 on the same server.

The important nginx configuration is as follows:

server {
        listen       127.0.0.1:8081;

        location / {
                root   /var/cache/proxy/nginx/files;
                try_files "/id=$arg_id.itag=$arg_itag.range=$arg_range.algo=$arg_algorithm" @proxy_youtube;
        }

        location @proxy_youtube {
                resolver 134.99.128.2;
                proxy_pass http://$host$request_uri;
                proxy_temp_path "/var/cache/proxy/nginx/tmp";
                proxy_ignore_client_abort off;
                proxy_store "/var/cache/proxy/nginx/files/id=$arg_id.itag=$arg_itag.range=$arg_range.algo=$arg_algorithm";
                proxy_set_header X-YouTube-Cache "[hidden email]";
                proxy_set_header Accept "video/*";
                proxy_set_header User-Agent "YouTube Cache (nginx)";
                proxy_set_header Accept-Encoding "";
                proxy_set_header Accept-Language "";
                proxy_set_header Accept-Charset "";
                proxy_set_header Cache-Control "";
        }
}

This way, the setup works. Perhaps one of you even has some advice for
improving it? E.g. I'm still looking for a way to log nginx
proxy_store hits and misses...?

Best regards,
- Christian Loth
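[Editor's note: the on-disk key that the try_files/proxy_store pair above builds can be mirrored in Python, e.g. to check offline whether a given chunk is already in the cache directory. This sketch only reimplements the `$arg_*` substitution from the config above.]

```python
# Sketch: reproduce nginx's
#   "id=$arg_id.itag=$arg_itag.range=$arg_range.algo=$arg_algorithm"
# file name for a given request URL.
from urllib.parse import urlsplit, parse_qs

def store_filename(url):
    args = parse_qs(urlsplit(url).query)
    def arg(name):  # nginx expands a missing $arg_* to the empty string
        return args.get(name, [""])[0]
    return (f"id={arg('id')}.itag={arg('itag')}"
            f".range={arg('range')}.algo={arg('algorithm')}")
```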


Re: anyone knows some info about youtube "range" parameter?

johan firdianto
Is this range behaviour related to the error that occurs in the YouTube
flash player?
I'm using a URL rewriter: if there is any range parameter in a video
URI, I strip it off.
I can save the whole video file to a directory by parsing store.log and
retrieving the same video using curl/wget.
The problem: when the same video is requested and my rewrite script
issues a 302 to the file in that directory (under a web directory
served by nginx), the "error occurred" message appears in the flash
player.
I refresh many times and the error still occurs, unless I choose a
different quality (for example 480p or 240p).
Any suggestions?


On Thu, Apr 26, 2012 at 2:41 PM, Christian Loth
<[hidden email]> wrote:

> <snip>

Re: anyone knows some info about youtube "range" parameter?

Christian Loth
Hello,

On Thursday 26 April 2012 10:15:33 johan firdianto wrote:

> This range behaviour related with error occured in youtube flash player ?
> i'm using url rewrite if any range parameter in uri videos, i stripp off.
> i can save the whole file of video to directory, by parsing store.log
> and retrieve the same video using curl/wget.
> The problem, when the same video is requested, and my rewrite script
> issues 302  to the file that directory located (under web directory
> using nginx).
> error occured message appears in flash player.
> I refresh many times, still error occured, unless i choose different
> quality (example: 480p or 240p).
> any suggest ?

Errors during video playback were the reason I had to do research on
"range" in the first place. Simply accepting the range parameter didn't
work, for obvious reasons: the cached object was constantly overwritten
by subsequent chunks.

Stripping the range parameter didn't work well for me either, because
YouTube's flash player expected a chunk but got the whole file. This
also resulted in an error for me.

The only solution that worked was adding the range parameter to the
filenames of the stored files, as seen in the nginx configuration in my
previous e-mail.

This solution currently works, and because the range parameters are
predictable with simple playback, we also see a caching and thus
bandwidth-saving effect. The bandwidth saving comes from two effects:
a) serving a user from cache, and b) if a user interrupts video
playback (clicking from one video to another), we have only downloaded
a certain number of chunks and not the whole video.

HTH,
- Christian Loth


>
>
> On Thu, Apr 26, 2012 at 2:41 PM, Christian Loth
>
> <[hidden email]> wrote:
> > Hi,
> >
> > On Thursday 26 April 2012 03:44:58 Eliezer Croitoru wrote:
> >> as i already answered a detailed answer to Ghassan.
> >> i think that caching the chunks if possible is pretty good thing.
> >> i tried it with nginx but havnt got the option to try it with
> >> store_url_rewrite.
> >
> > To maybe save you some work, here's how I did it. First of all, I use
> > nginx as a cache-peer - so no URL rewriting script. Excerpt of my
> > squid.conf:
> >
> >
> > acl youtube_videos url_regex -i ^http://[^/]+(\.youtube\.com|\.googlevideo\.com|\.video\.google\.com)/(videoplayback|get_video|videodownload)\?
> > acl range_request req_header Range .
> > acl begin_param url_regex -i [?&]begin=
> > acl id_param url_regex -i [?&]id=
> > acl itag_param url_regex -i [?&]itag=
> > acl sver3_param url_regex -i [?&]sver=3
> > cache_peer 127.0.0.1 parent 8081 0 proxy-only no-query connect-timeout=5 no-digest
> > cache_peer_access 127.0.0.1 allow youtube_videos id_param itag_param sver3_param !begin_param !range_request
> > cache_peer_access 127.0.0.1 deny all
> >
> > Small note: the range request in this configuration that is denied is the
> > HTTP-Range-Header, not the range URL parameter! Nginx is of course
> > running on port 8081 on the same server.
> >
> > The important configuration directive in nginx is as follows:
> >
> > server {
> >        listen       127.0.0.1:8081;
> >
> >        location / {
> >                root   /var/cache/proxy/nginx/files;
> >                try_files "/id=$arg_id.itag=$arg_itag.range=$arg_range.algo=$arg_algorithm" @proxy_youtube;
> >        }
> >
> >        location @proxy_youtube {
> >                resolver 134.99.128.2;
> >                proxy_pass http://$host$request_uri;
> >                proxy_temp_path "/var/cache/proxy/nginx/tmp";
> >                proxy_ignore_client_abort off;
> >                proxy_store "/var/cache/proxy/nginx/files/id=$arg_id.itag=$arg_itag.range=$arg_range.algo=$arg_algorithm";
> >                proxy_set_header X-YouTube-Cache "[hidden email]";
> >                proxy_set_header Accept "video/*";
> >                proxy_set_header User-Agent "YouTube Cache (nginx)";
> >                proxy_set_header Accept-Encoding "";
> >                proxy_set_header Accept-Language "";
> >                proxy_set_header Accept-Charset "";
> >                proxy_set_header Cache-Control "";
> >        }
> > }
> >
> > This way, the setup works. Perhaps anyone of you even has a piece of
> > advice for improving it? E.g. I'm still looking for a way to log nginx
> > proxy_store hits and misses...?
> >
> > Best regards,
> > - Christian Loth
>

Re: anyone knows some info about youtube "range" parameter?

Eliezer Croitoru-2
In reply to this post by johan firdianto
On 26/04/2012 11:15, johan firdianto wrote:
> This range behaviour related with error occured in youtube flash player ?
> i'm using url rewrite if any range parameter in uri videos, i stripp off.
> i can save the whole file of video to directory, by parsing store.log
> and retrieve the same video using curl/wget.
> The problem, when the same video is requested, and my rewrite script
> issues 302  to the file that directory located (under web directory
> using nginx).
I think a 302 redirect to another cached object is the worst approach for
cache applications whose content has cross-domain restrictions. Every
time I have tried a 302 redirect, it caused problems for this kind of
website/app. The better way is a "transparent" redirection using Squid's
store_url_rewrite capability. I will try to build a store_url_rewriter
with range support and see whether I get the same bad results I had with
nginx.

I have good experience with both Ruby and Perl, so I will try them both.
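For reference, the Squid side of such a transparent store-URL rewrite is hooked up roughly as follows. This assumes squid 2.7, the branch that provides storeurl rewriting; the helper path, child count, and ACL pattern are illustrative, not taken from this thread:

```
# squid.conf excerpt (squid 2.7): send only YouTube videoplayback
# requests through the storeurl helper, leave everything else alone.
storeurl_rewrite_program /etc/squid/yt_storeurl.rb
storeurl_rewrite_children 20

acl yt_video url_regex -i ^http://[^/]+\.(youtube|googlevideo)\.com/(videoplayback|get_video)\?
storeurl_access allow yt_video
storeurl_access deny all
```

With this in place the helper only ever sees the video URLs, which keeps the per-request overhead down.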

Regards,
Eliezer

> error occured message appears in flash player.
> I refresh many times, still error occured, unless i choose different
> quality (example: 480p or 240p).
> any suggest ?
>
>
> On Thu, Apr 26, 2012 at 2:41 PM, Christian Loth
> <[hidden email]>  wrote:
<SNIP>

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer <at> ngtech.co.il

Re: anyone knows some info about youtube "range" parameter?

Eliezer Croitoru-2
In reply to this post by Hasanen AL-Bana
On 25/04/2012 20:48, Hasanen AL-Bana wrote:

> wouldn't be better if we save the video chunks ? youtube is streaming
> files with 1.7MB flv chunks, youtube flash player knows how to merge
> them and play them....so the range start and end will alaways be the
> same for the same video as long as user doesn't fast forward it or do
> something nasty...even in that case , squid will just cache that
> chunk...that is possible by rewriting the STORE_URL and including the
> range start&  end
>
> On Wed, Apr 25, 2012 at 8:39 PM, Ghassan Gharabli
> <[hidden email]>  wrote:
<SNIP>

I have written a small Ruby store_url_rewrite helper that works with the
range argument in the URL (at the bottom of this mail).

It is written in Ruby, and I reused some of Andre's work at
http://youtube-cache.googlecode.com

It is not a fancy script and is meant only for this specific YouTube
problem.

I know that YouTube hasn't changed this range behavior for the whole
globe, because right now I am working from a remote location that still
has no "range" at all in the URL, so within the same country you can see
two different URL patterns.

The script is not CPU friendly (it always performs the same number of
regex lookups), but it is not what will bring your server down!

This is only a prototype; if anyone wants to add more domains and
patterns, I will be more than glad to make this script better than it is
now.

It is one hell of a regex-heavy script. I could have used the uri and cgi
libraries to make it friendlier, but I chose to just build the script
skeleton and move on from there using Ruby's basic methods and classes.

The idea of the script is to extract each of the arguments (id, itag and
range) one by one, rather than using a single regex to extract them all,
because YouTube uses a couple of different URL structures.

If someone can help me reorganize this script so it is more flexible for
other sites, with numbered cases per site/domain/URL structure, I would
be happy for any help.

Planned additions to this script:
SourceForge: catch all download mirrors into one object
IMDb HQ (480p and up) videos
Vimeo videos

If more than one person wants them:
blip.tv
some Facebook videos
some other image storage sites

If you want me to add anything to my "try to cache" list, I will be happy
to hear from you by e-mail.

Regards,
Eliezer


##code start##
#!/usr/bin/ruby
require "syslog"

class SquidRequest
        attr_accessor :url, :user
        attr_reader :client_ip, :method

        def method=(s)
                @method = s.downcase
        end

        def client_ip=(s)
                @client_ip = s.split('/').first
        end
end

def read_requests
        # Input format: URL <SP> client_ip "/" fqdn <SP> user <SP> method [<SP> kvpairs]<NL>
        STDIN.each_line do |ln|
                r = SquidRequest.new
                r.url, r.client_ip, r.user, r.method, *dummy = ln.rstrip.split(' ')
                (STDOUT << "#{yield r}\n").flush
        end
end

def log(msg)
        Syslog.log(Syslog::LOG_ERR, "%s", msg)
end

def main
        Syslog.open('nginx.rb', Syslog::LOG_PID)
        log("Started")

        read_requests do |r|
                # Extract id, itag and range one by one, since YouTube
                # uses several different URL structures.
                id    = r.url.match(/.*(id=)([A-Za-z0-9]*).*/)
                itag  = r.url.match(/.*(itag=)([0-9]*).*/)
                range = r.url.match(/.*(range=)([0-9\-]*).*/)

                # Pass URLs missing any of the three arguments through
                # unchanged, instead of crashing on String#match's nil.
                next r.url unless id && itag && range

                newurl = "http://video-srv.youtube.com.SQUIDINTERNAL/id_" +
                         id[2] + "_itag_" + itag[2] + "_range_" + range[2]

                log("YouTube Video [#{newurl}].")
                newurl
        end
end

main
##code end##
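To illustrate what the helper emits, here is its three extraction regexes applied to a single videoplayback URL. The URL, id, itag, and range values are made up for the example:

```ruby
# Apply the helper's extraction regexes to one example URL and print
# the store URL that squid would use as the cache key.
url = "http://v4.lscache7.c.youtube.com/videoplayback?ip=0.0.0.0" \
      "&id=abc123&itag=34&range=1700000-3399999&sver=3"

idrx    = /.*(id=)([A-Za-z0-9]*).*/
itagrx  = /.*(itag=)([0-9]*).*/
rangerx = /.*(range=)([0-9\-]*).*/

newurl = "http://video-srv.youtube.com.SQUIDINTERNAL/id_" +
         url.match(idrx)[2] + "_itag_" + url.match(itagrx)[2] +
         "_range_" + url.match(rangerx)[2]

puts newurl
# http://video-srv.youtube.com.SQUIDINTERNAL/id_abc123_itag_34_range_1700000-3399999
```

Because the range start/end is part of the key, each chunk of a video is cached as its own object, which is exactly what the flash player's chunked requests need.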


--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer <at> ngtech.co.il

Re: anyone knows some info about youtube "range" parameter?

Hasanen AL-Bana
On Fri, Apr 27, 2012 at 7:43 AM, Eliezer Croitoru <[hidden email]> wrote:

> On 25/04/2012 20:48, Hasanen AL-Bana wrote:
>>
>> wouldn't be better if we save the video chunks ? youtube is streaming
>> files with 1.7MB flv chunks, youtube flash player knows how to merge
>> them and play them....so the range start and end will alaways be the
>> same for the same video as long as user doesn't fast forward it or do
>> something nasty...even in that case , squid will just cache that
>> chunk...that is possible by rewriting the STORE_URL and including the
>> range start&  end
>>
>>
>> On Wed, Apr 25, 2012 at 8:39 PM, Ghassan Gharabli
>> <[hidden email]>  wrote:
>
> <SNIP>
>
> i have written a small ruby store_url_rewrite that works with range argument
> in the url.
> (on the bottom of this mail)
>
> it's written in ruby and i took some of andre work at
> http://youtube-cache.googlecode.com
>
> it's not such a fancy script and ment only for this specific youtube
> problem.
>
> i know that youtube didnt changed the this range behavior for the whole
> globe cause as for now i'm working from a remote location that still has no
> "range" at all in the url.
> so in the same country you can get two different url patterns.
>
> this script is not cpu friendly (uses more the same amount of regex lookups
> always) but it's not what will bring your server down!!!

That is why I am going to write it in Perl. On my server I might need to
run more than 40 instances of the script, and Perl is the fastest thing I
have ever tested.

> <SNIP>

Re: anyone knows some info about youtube "range" parameter?

Eliezer Croitoru-2
On 27/04/2012 09:52, Hasanen AL-Bana wrote:

> <SNIP>
> That is why I am going to write it in perl, in my server I might need
> to run more than 40 instances on the script and perl is like the
> fastest thing I have ever tested
I have tried a couple of languages for almost the same task. What I want
is code that is readable and efficient. So far I have used Ruby, Perl,
Python and Java for this specific task. Ruby was fast and usable, but
Java was superior to all the others, combining most of what I needed.
Regex in Java is quite different from Perl and the others, so I used
Java's basic string classes to implement these features.

I hope to see your Perl code.

By the way, 40 instances are not really needed for most of the servers I
have seen so far; 20 should be more than you need.

What is the "size" of this server? Requests per second? Bandwidth? CPU?
RAM? Cache space?

Regards,
Eliezer

>> <SNIP>

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer <at> ngtech.co.il

Re: anyone knows some info about youtube "range" parameter?

Hasanen AL-Bana
I get around 40,000 req/min. The server is a Dell R510 with a Xeon CPU
and 48GB of RAM; all disks are SAS (1.2TB). Reducing the number of
url_rewriters causes Squid to stop working, and cache.log says more
url_rewriters are needed. Ah, I forgot to say that I have many
url_rewriters besides my store_url rewriters.

On Fri, Apr 27, 2012 at 10:04 AM, Eliezer Croitoru <[hidden email]> wrote:

> <SNIP>

Re: anyone knows some info about youtube "range" parameter?

johan firdianto
In reply to this post by Christian Loth
Stripping range works for me; now I can save the whole video.
The disadvantage of saving chunk files is that the same video is split
across several chunks: right now we might save 1.7MB chunk files, and the
next time YouTube changes the range parameter to 2.3MB, the same video
will be downloaded again with a different chunking.

The disadvantage of saving the whole video is that if the user wants to
jump to the end of the video, they have to wait for the flash player to
finish downloading. It is a trade-off either way.
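The first point can be made concrete with a quick sketch. The 1.7MB/2.3MB chunk sizes are the figures from this thread; the fixed-size chunking and the key format are illustrative assumptions:

```ruby
# Show that changing the chunk size shifts every range boundary, so
# none of the previously cached range-keyed objects can be reused.
def chunk_ranges(total_bytes, chunk_size)
  ranges = []
  offset = 0
  while offset < total_bytes
    last = [offset + chunk_size - 1, total_bytes - 1].min
    ranges << "range=#{offset}-#{last}"
    offset += chunk_size
  end
  ranges
end

old_keys = chunk_ranges(5_000_000, 1_700_000)
new_keys = chunk_ranges(5_000_000, 2_300_000)

puts old_keys.first              # range=0-1699999
puts new_keys.first              # range=0-2299999
puts (old_keys & new_keys).size  # 0 => every chunk is fetched again
```

So with range-keyed caching a chunk-size change invalidates the whole cache for that video, whereas whole-file caching survives it.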


On Thu, Apr 26, 2012 at 3:39 PM, Christian Loth
<[hidden email]> wrote:

> <SNIP>

Re: anyone knows some info about youtube "range" parameter?

Eliezer Croitoru-2
In reply to this post by Hasanen AL-Bana
On 27/04/2012 11:56, Hasanen AL-Bana wrote:
> I get around 40,000 req/min, the server is Dell R510 with Xeon cpu and
> 48GB of RAM, all disks are SAS (1.2TB)
> Reducing the number of url_rewriters cause squid to stop working and
> cache.log says more url_rewriters are needed...ah I forgot to say that
> I have many URL_REWRITERS beside my store_url rewriters.
I must say I'm impressed! It's the second server of this size and quality
of system I have heard about.

If you really have 40,000 req/min, it makes more sense. For this kind of
system a "compiled" solution is much better in performance and memory
footprint; Java is one step above interpreted scripts. My opinion is that
in your case you should use something other than Perl for the
url_rewriter/store_url_rewrite, if the system's options are fairly
static.

Regards,
Eliezer

>
> On Fri, Apr 27, 2012 at 10:04 AM, Eliezer Croitoru<[hidden email]>  wrote:
>> On 27/04/2012 09:52, Hasanen AL-Bana wrote:
>>>
>>> On Fri, Apr 27, 2012 at 7:43 AM, Eliezer Croitoru<[hidden email]>
>>>   wrote:
>>>>
>>>> On 25/04/2012 20:48, Hasanen AL-Bana wrote:
>>>>>
>>>>>
>>>>> wouldn't be better if we save the video chunks ? youtube is streaming
>>>>> files with 1.7MB flv chunks, youtube flash player knows how to merge
>>>>> them and play them....so the range start and end will alaways be the
>>>>> same for the same video as long as user doesn't fast forward it or do
>>>>> something nasty...even in that case , squid will just cache that
>>>>> chunk...that is possible by rewriting the STORE_URL and including the
>>>>> range start&      end
>>>>>
>>>>>
>>>>> On Wed, Apr 25, 2012 at 8:39 PM, Ghassan Gharabli
>>>>> <[hidden email]>      wrote:
>>>>
>>>>
>>>> <SNIP>
>>>>
>>>> i have written a small ruby store_url_rewrite that works with range
>>>> argument
>>>> in the url.
>>>> (on the bottom of this mail)
>>>>
>>>> it's written in ruby and i took some of andre work at
>>>> http://youtube-cache.googlecode.com
>>>>
>>>> it's not such a fancy script and ment only for this specific youtube
>>>> problem.
>>>>
>>>> i know that youtube didnt changed the this range behavior for the whole
>>>> globe cause as for now i'm working from a remote location that still has
>>>> no
>>>> "range" at all in the url.
>>>> so in the same country you can get two different url patterns.
>>>>
>>>> this script is not cpu friendly (uses more the same amount of regex
>>>> lookups
>>>> always) but it's not what will bring your server down!!!
>>>
>>>
>>> That is why I am going to write it in perl, in my server I might need
>>> to run more than 40 instances on the script and perl is like the
>>> fastest thing I have ever tested
>>
>> i have tried couple of languages to do almost the same thing.
>> what i do want is that the code will be readable and efficient.
>> i have used until now ruby perl python and JAVA for this specific task ruby
>> was fast and usable but JAVA was much more superior to all the others
>> combining most of what i needed.
>> regex on JAVA was so different then perl and the others so i used the basic
>> string classes of JAVA to implement these features.
>>
>> i hope to see your perl code.
>>
>> by the way 40 instances are not really needed for most of the servers i have
>> seen until now.
>> 20 should be more then you need.
>>
>> what is the "size" of this server? req per sec? bandwidth?cpu? ram? cache
>> space?
>>
>> Regards,
>> Eliezer
>>
>>
>>>
>>>>
>>>> This is only a prototype; if anyone wants to add some more domains and
>>>> patterns, I will be more than glad to make this script better than it
>>>> is now.
>>>>
>>>> This is one hell of a regex-heavy script. I could have used the URI
>>>> and CGI libs to make the script more user friendly, but I chose to
>>>> just build the script skeleton and move on from there using Ruby's
>>>> basic methods and classes.
>>>>
>>>> The idea of this script is to extract each of the arguments, such as
>>>> id, itag and range, one by one, and not to use one regex to extract
>>>> them all, because there are a couple of URL structures being used by
>>>> youtube.
>>>>
>>>> If someone can help me reorganize this script to make it more flexible
>>>> for other sites, with numbered cases per site\domain\url_structure, I
>>>> will be happy to get any help I can.
>>>>
>>>> Planned to be added to this script for now:
>>>> SourceForge: catch all download mirrors as one object
>>>> IMDB HQ (480p and up) videos
>>>> Vimeo videos
>>>>
>>>> If more than just one person wants them:
>>>> blip.tv
>>>> some Facebook videos
>>>> some other image storage sites
>>>>
>>>> If you want me to add anything to my "try to cache" list, I will be
>>>> happy to hear from you by e-mail.
>>>>
>>>> Regards,
>>>> Eliezer
>>>>
>>>>
>>>> ##code start##
>>>> #!/usr/bin/ruby
>>>> require "syslog"
>>>>
>>>> class SquidRequest
>>>>         attr_accessor :url, :user
>>>>         attr_reader :client_ip, :method
>>>>
>>>>         def method=(s)
>>>>                 @method = s.downcase
>>>>         end
>>>>
>>>>         def client_ip=(s)
>>>>                 @client_ip = s.split('/').first
>>>>         end
>>>> end
>>>>
>>>> def read_requests
>>>>         # URL<SP> client_ip "/" fqdn<SP> user<SP> method [<SP> kvpairs]<NL>
>>>>         STDIN.each_line do |ln|
>>>>                 r = SquidRequest.new
>>>>                 r.url, r.client_ip, r.user, r.method, *dummy = ln.rstrip.split(' ')
>>>>                 (STDOUT << "#{yield r}\n").flush
>>>>         end
>>>> end
>>>>
>>>> def log(msg)
>>>>         Syslog.log(Syslog::LOG_ERR, "%s", msg)
>>>> end
>>>>
>>>> def main
>>>>         Syslog.open('nginx.rb', Syslog::LOG_PID)
>>>>         log("Started")
>>>>
>>>>         read_requests do |r|
>>>>                 id    = r.url[/id=([A-Za-z0-9]+)/, 1]
>>>>                 itag  = r.url[/itag=([0-9]+)/, 1]
>>>>                 range = r.url[/range=([0-9\-]+)/, 1]
>>>>
>>>>                 # Pass non-matching URLs through unchanged instead of
>>>>                 # crashing on a nil match.
>>>>                 next r.url unless id && itag && range
>>>>
>>>>                 newurl = "http://video-srv.youtube.com.SQUIDINTERNAL/id_#{id}_itag_#{itag}_range_#{range}"
>>>>                 log("YouTube Video [#{newurl}].")
>>>>                 newurl
>>>>         end
>>>> end
>>>>
>>>> main
>>>> ##code end##
>>>>
>>>>
>>>>
>>>> --
>>>> Eliezer Croitoru
>>>> https://www1.ngtech.co.il
>>>> IT consulting for Nonprofit organizations
>>>> eliezer<at>    ngtech.co.il
>>
>>
>>
>> --
>> Eliezer Croitoru
>> https://www1.ngtech.co.il
>> IT consulting for Nonprofit organizations
>> eliezer<at>  ngtech.co.il


--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer <at> ngtech.co.il

Re: anyone knows some info about youtube "range" parameter?

Eliezer Croitoru-2
In reply to this post by Eliezer Croitoru-2
On 24/04/2012 21:02, Eliezer Croitoru wrote:

> as for some people asking me recently about youtube cache i have checked
> again and found that youtube changed their video uris and added an
> argument called "range" that is managed by the youtube player.
> the original url\uri dosnt include range but the youtube player is using
> this argument to save bandwidth.
>
> i can implement the cahing with ranges on nginx but i dont know yet the
> way that range works.
> it can be based on user bandwidth or "fixed" size of chunkes.
>
> if someone up to the mission of analyzing it a bit more to understand it
> so the "range" cache will be implemented i will be happy to get some
> help with it.
>
> Thanks,
> Eliezer
>
>
As of now, "minimum_object_size 512 bytes" won't do the trick for the 302
redirection on Squid 2.7, because the 302 response is 963 bytes in size.
So I have used:
minimum_object_size 1024 bytes
just to make sure it will work.
Also, this is a dedicated YouTube video server, so it is fine with this
limit.
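
For reference, the setup above can be sketched as a squid.conf excerpt.
Only the minimum_object_size line comes from this message; the helper
path, child count, and ACL pattern are illustrative assumptions:

```
# squid.conf excerpt (Squid 2.7) -- illustrative sketch
# Keep the 963-byte 302 redirects out of the cache:
minimum_object_size 1024 bytes

# Store-URL rewriting (helper path and child count are examples):
storeurl_rewrite_program /usr/local/bin/nginx.rb
storeurl_rewrite_children 20
acl store_rewrite_list urlpath_regex \/videoplayback
storeurl_access allow store_rewrite_list
storeurl_access deny all
```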

Regards,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer <at> ngtech.co.il

Re: anyone knows some info about youtube "range" parameter?

Ghassan
Hello Eliezer,

Are you trying to save all video chunks as separate parts, or to
capture/download the whole video object through cURL or similar? I don't
think the latter will work, since it will cause an error with the new
YouTube player.

What I have reached lately is saving the same YouTube video chunks for
360p (itag=34) and 480p (itag=35) without saving the itag, since I want
to save more bandwidth (that's why I only wrote the scripts for you as an
example). That means if someone wants to watch 480p he gets the cached
360p content; that's why I didn't add the itag. If he chooses 720p and
above, another script catches it, matching itags 37 and 22. I know that
is not the best solution, but at least it works pretty well with no
errors at all, as long as the client can always fast-forward.
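
The bucketing described above can be sketched roughly as follows. Hedged:
the `.SQUIDINTERNAL` key convention is taken from the store_url_rewrite
examples in this thread, while the hostname and exact URL patterns are
illustrative assumptions, not YouTube's definitive format:

```ruby
# Sketch of collapsing SD itags (34, 35) into one cache bucket while
# keeping HD itags (22, 37) separate, as described above.
def store_key(url)
  id    = url[/[?&]id=([A-Za-z0-9_-]+)/, 1]
  itag  = url[/[?&]itag=(\d+)/, 1]
  range = url[/[?&]range=([\d-]+)/, 1]
  # Pass non-matching URLs through unchanged so the helper never crashes.
  return url unless id && itag

  # 34 and 35 share the "sd" bucket; 22 and 37 go to "hd".
  bucket = %w[22 37].include?(itag) ? "hd" : "sd"
  key = "http://video-srv.youtube.com.SQUIDINTERNAL/id_#{id}_q_#{bucket}"
  key += "_range_#{range}" if range
  key
end
```

With this key, a 480p (itag=35) request maps to the same cache object as
the already-stored 360p (itag=34) chunk, which is exactly the bandwidth
trade-off described above.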

I'm using Squid 2.7 STABLE9 compiled on Windows 64-bit with Perl x64.

Regarding the 302 redirection: I have made sure to update the source file
client_side.c to fix the 302 redirection loop, so I really don't have to
worry about anything now. What is your target regarding YouTube with the
"range" argument, and what is the problem so far?

I have a RAID with 5 HDDs and an average of 2732.6 HTTP requests per
minute. Because I want to save more bandwidth, I try to analyze the HTTP
requests so I can keep updating my Perl script to match the most-wanted
sites serving videos, MP3s, etc.

For a second I thought maybe someone could help write an intelligent
external helper script that would capture the whole byte range, though I
know that is really hard to do, since we are dealing with byte ranges.

I have one question that keeps teasing me: what is the comparison between
Squid and Blue Coat? Is it hardware performance, or does Blue Coat just
have more tricks to cache everything and reach a maximum hit ratio?



Ghassan



On Mon, Apr 30, 2012 at 1:29 AM, Eliezer Croitoru <[hidden email]> wrote:

> On 24/04/2012 21:02, Eliezer Croitoru wrote:
>>
>> as for some people asking me recently about youtube cache i have checked
>> again and found that youtube changed their video uris and added an
>> argument called "range" that is managed by the youtube player.
>> the original url\uri dosnt include range but the youtube player is using
>> this argument to save bandwidth.
>>
>> i can implement the cahing with ranges on nginx but i dont know yet the
>> way that range works.
>> it can be based on user bandwidth or "fixed" size of chunkes.
>>
>> if someone up to the mission of analyzing it a bit more to understand it
>> so the "range" cache will be implemented i will be happy to get some
>> help with it.
>>
>> Thanks,
>> Eliezer
>>
>>
> as for now the "minimum_object_size 512 bytes" wont do the trick for 302
> redirection on squid2.7 because the 302 response is 963 big size.
> so i have used:
> minimum_object_size 1024 bytes
> just to make sure it will work.
> and also this is a youtube videos dedicated server so it's on with this
> limit.
>
> Regards,
>
> Eliezer
>
> --
> Eliezer Croitoru
> https://www1.ngtech.co.il
> IT consulting for Nonprofit organizations
> eliezer <at> ngtech.co.il

Re: anyone knows some info about youtube "range" parameter?

Eliezer Croitoru-2
On 30/04/2012 02:18, Ghassan Gharabli wrote:

> Hello Eliezer,
>
> Are you trying to save all video chunks as separate parts, or to
> capture/download the whole video object through cURL or similar? I don't
> think the latter will work, since it will cause an error with the new
> YouTube player.
>
> What I have reached lately is saving the same YouTube video chunks for
> 360p (itag=34) and 480p (itag=35) without saving the itag, since I want
> to save more bandwidth (that's why I only wrote the scripts for you as
> an example). That means if someone wants to watch 480p he gets the
> cached 360p content; that's why I didn't add the itag. If he chooses
> 720p and above, another script catches it, matching itags 37 and 22. I
> know that is not the best solution, but at least it works pretty well
> with no errors at all, as long as the client can always fast-forward.
>
> I'm using Squid 2.7 STABLE9 compiled on Windows 64-bit with Perl x64.
>
> Regarding the 302 redirection: I have made sure to update the source
> file client_side.c to fix the 302 redirection loop, so I really don't
> have to worry about anything now. What is your target regarding YouTube
> with the "range" argument, and what is the problem so far?
>
> I have a RAID with 5 HDDs and an average of 2732.6 HTTP requests per
> minute. Because I want to save more bandwidth, I try to analyze the
> HTTP requests so I can keep updating my Perl script to match the
> most-wanted sites serving videos, MP3s, etc.
>
> For a second I thought maybe someone could help write an intelligent
> external helper script that would capture the whole byte range, though
> I know that is really hard to do, since we are dealing with byte
> ranges.
>
> I have one question that keeps teasing me: what is the comparison
> between Squid and Blue Coat? Is it hardware performance, or does Blue
> Coat just have more tricks to cache everything and reach a maximum hit
> ratio?
>
>
>
> Ghassan
I was messing with store_url_rewrite and url_rewrite for quite some time,
just for knowledge.

I have been researching every concept that exists in Squid until now.
A while back (a year or more) I wrote a store_url_rewrite helper in Java
and posted the code somewhere. The reason I used Java was that it was the
fastest and simplest of all the other languages I know (Ruby, Perl,
Python). I was saving bandwidth using nginx because it was simple to set
up.

I don't really like the idea of deceiving my users about the quality, and
it can also create a very problematic state where the user gets partial
HQ content, which will stop the user from watching videos.

I don't really have any problems with the ranges. I just noticed that
there are providers in my country that are not using the "range"
parameter but the "begin" parameter...

I will be very happy to get the client_side.c patch that fixes the 302
loop.

The problem with an external helper is that it's not really needed. If
you need something like that you can use ICAP, but there is still a lot
of work to be done, so for now it seems that store_url_rewrite is the
best option.

Blue Coat has the option to relate objects using ETag and other object
parameters as well, which makes it a very robust caching system.

nginx does the job just fine for YouTube videos, but it seems like some
headers are missing and should be added.

By the way,
what video\mp3 sites are you caching using your scripts?


Eliezer

>
>
>
> On Mon, Apr 30, 2012 at 1:29 AM, Eliezer Croitoru<[hidden email]>  wrote:
>> On 24/04/2012 21:02, Eliezer Croitoru wrote:
>>>
>>> as for some people asking me recently about youtube cache i have checked
>>> again and found that youtube changed their video uris and added an
>>> argument called "range" that is managed by the youtube player.
>>> the original url\uri dosnt include range but the youtube player is using
>>> this argument to save bandwidth.
>>>
>>> i can implement the cahing with ranges on nginx but i dont know yet the
>>> way that range works.
>>> it can be based on user bandwidth or "fixed" size of chunkes.
>>>
>>> if someone up to the mission of analyzing it a bit more to understand it
>>> so the "range" cache will be implemented i will be happy to get some
>>> help with it.
>>>
>>> Thanks,
>>> Eliezer
>>>
>>>
>> as for now the "minimum_object_size 512 bytes" wont do the trick for 302
>> redirection on squid2.7 because the 302 response is 963 big size.
>> so i have used:
>> minimum_object_size 1024 bytes
>> just to make sure it will work.
>> and also this is a youtube videos dedicated server so it's on with this
>> limit.
>>
>> Regards,
>>
>> Eliezer
>>
>> --
>> Eliezer Croitoru
>> https://www1.ngtech.co.il
>> IT consulting for Nonprofit organizations
>> eliezer<at>  ngtech.co.il


--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer <at> ngtech.co.il