Compare commits

...

41 commits

Author SHA1 Message Date
GitHub Actions Bot
19731cc7c4 update M3U8 2026-01-27 23:00:28 -05:00
GitHub Actions Bot
a8c83675b2 update EPG 2026-01-28 03:59:31 +00:00
GitHub Actions Bot
58c317904d health log 2026-01-28 03:55:34 +00:00
GitHub Actions Bot
0baef6611e update M3U8 2026-01-27 22:30:58 -05:00
GitHub Actions Bot
bfde140286 update M3U8 2026-01-27 22:02:27 -05:00
GitHub Actions Bot
a2b377565a update M3U8 2026-01-27 21:30:35 -05:00
GitHub Actions Bot
0be3f34fc5 update M3U8 2026-01-27 21:01:51 -05:00
doms9
00000d9844 e
re-add totalsportek.py
2026-01-27 20:48:44 -05:00
GitHub Actions Bot
3875d181cd update M3U8 2026-01-27 20:32:20 -05:00
GitHub Actions Bot
ad3ba8e34d health log 2026-01-27 20:18:36 -05:00
doms9
00000d968d e
fix livetvsx.py scraping
edit health check script
2026-01-27 20:14:27 -05:00
GitHub Actions Bot
53686dcc16 update M3U8 2026-01-27 19:31:35 -05:00
doms9
00000d947f e
add more logging to epg-fetch.py and health.sh
2026-01-27 19:21:28 -05:00
GitHub Actions Bot
6f01743391 update M3U8 2026-01-27 19:03:05 -05:00
GitHub Actions Bot
efe30de3e6 update M3U8 2026-01-27 18:31:49 -05:00
GitHub Actions Bot
8a84ae350e update EPG 2026-01-27 23:11:29 +00:00
GitHub Actions Bot
13ab3b4d8c update M3U8 2026-01-27 18:01:24 -05:00
GitHub Actions Bot
032d32825e update M3U8 2026-01-27 17:31:23 -05:00
GitHub Actions Bot
51ca28fed2 update M3U8 2026-01-27 17:00:48 -05:00
doms9
00000d9d0f e 2026-01-27 16:34:41 -05:00
GitHub Actions Bot
38357e05f4 update M3U8 2026-01-27 16:31:12 -05:00
GitHub Actions Bot
47364423ae update M3U8 2026-01-27 16:01:23 -05:00
GitHub Actions Bot
c9299aca00 health log 2026-01-27 20:45:09 +00:00
GitHub Actions Bot
10b509e86b update M3U8 2026-01-27 15:31:21 -05:00
GitHub Actions Bot
de35a7c753 update M3U8 2026-01-27 15:05:31 -05:00
GitHub Actions Bot
67ac4e9522 update M3U8 2026-01-27 14:31:21 -05:00
GitHub Actions Bot
a570a49311 update EPG 2026-01-27 19:07:11 +00:00
GitHub Actions Bot
83a634c044 update M3U8 2026-01-27 14:02:59 -05:00
GitHub Actions Bot
d13f34befd update M3U8 2026-01-27 13:31:53 -05:00
GitHub Actions Bot
a65f6aa7d0 update M3U8 2026-01-27 13:03:19 -05:00
GitHub Actions Bot
d1aed35616 update M3U8 2026-01-27 12:03:42 -05:00
GitHub Actions Bot
1259360fd1 update M3U8 2026-01-27 11:01:46 -05:00
GitHub Actions Bot
9f8c54952e update M3U8 2026-01-27 10:01:48 -05:00
GitHub Actions Bot
97f83a149f health log 2026-01-27 14:56:19 +00:00
GitHub Actions Bot
67b1a30621 update M3U8 2026-01-27 09:02:41 -05:00
GitHub Actions Bot
43b1d23f4c update M3U8 2026-01-27 08:02:58 -05:00
GitHub Actions Bot
06d8992887 update EPG 2026-01-27 11:00:28 +00:00
GitHub Actions Bot
d5acde16f9 health log 2026-01-27 08:55:53 +00:00
GitHub Actions Bot
9e2f2a1790 update M3U8 2026-01-26 23:31:28 -05:00
GitHub Actions Bot
12e9ea57fc update EPG 2026-01-27 04:01:38 +00:00
GitHub Actions Bot
e084124cd6 update M3U8 2026-01-26 23:00:42 -05:00
12 changed files with 107745 additions and 101218 deletions

File diff suppressed because it is too large

204573
M3U8/TV.xml

File diff suppressed because one or more lines are too long

View file

@@ -28,10 +28,12 @@ http://fl1.moveonjoy.com/Aspire/index.m3u8
http://fl1.moveonjoy.com/BBC_AMERICA/index.m3u8
#EXTINF:-1 tvg-chno="10" tvg-id="BBC.News.(North.America).HD.us2" tvg-name="BBC World News" tvg-logo="http://schedulesdirect-api20141201-logos.s3.dualstack.us-east-1.amazonaws.com/stationLogos/s89542_dark_360w_270h.png" group-title="TV",BBC World News
http://fl1.moveonjoy.com/BBC_WORLD_NEWS/index.m3u8
#EXTVLCOPT:http-user-agent=curl/8.5.0
http://lucidhosting.xyz:82/sandriassoc@gmail.com/Sm8G4ddxoW/1723
#EXTINF:-1 tvg-chno="11" tvg-id="BET.HD.us2" tvg-name="BET" tvg-logo="https://schedulesdirect-api20141201-logos.s3.dualstack.us-east-1.amazonaws.com/stationLogos/s10051_dark_360w_270h.png" group-title="TV",BET
https://streamer1.nexgen.bz/BET/index.m3u8
#EXTVLCOPT:http-user-agent=curl/8.5.0
http://lucidhosting.xyz:82/sandriassoc@gmail.com/Sm8G4ddxoW/2071
#EXTINF:-1 tvg-chno="12" tvg-id="Big.Ten.Network.HD.us2" tvg-name="Big Ten Network" tvg-logo="https://schedulesdirect-api20141201-logos.s3.dualstack.us-east-1.amazonaws.com/stationLogos/s56783_dark_360w_270h.png" group-title="TV",Big Ten Network
http://23.237.104.106:8080/USA_BTN/index.m3u8
@@ -148,7 +150,8 @@ http://mytvstream.net:8080/live/A1Jay5/362586/66795.m3u8
http://mytvstream.net:8080/live/A1Jay5/362586/58827.m3u8
#EXTINF:-1 tvg-chno="50" tvg-id="FanDuel.Sports.Network.Ohio.(Cleveland).HDTV.us" tvg-name="FDSN Ohio" tvg-logo="http://schedulesdirect-api20141201-logos.s3.dualstack.us-east-1.amazonaws.com/stationLogos/s49691_dark_360w_270h.png" group-title="TV",FDSN Ohio
http://mytvstream.net:8080/live/A1Jay5/362586/17752.m3u8
#EXTVLCOPT:http-user-agent=curl/8.5.0
http://lucidhosting.xyz:82/sandriassoc@gmail.com/Sm8G4ddxoW/222126
#EXTINF:-1 tvg-chno="51" tvg-id="FanDuel.Sports.Network.Oklahoma.24/7.HDTV.(Tulsa).us" tvg-name="FDSN Oklahoma" tvg-logo="https://i.gyazo.com/80ad6fd142cd67f06eef58d9ce5aa72b.png" group-title="TV",FDSN Oklahoma
http://mytvstream.net:8080/live/A1Jay5/362586/20934.m3u8
@@ -157,13 +160,16 @@ http://mytvstream.net:8080/live/A1Jay5/362586/20934.m3u8
http://mytvstream.net:8080/live/A1Jay5/362586/221151.m3u8
#EXTINF:-1 tvg-chno="53" tvg-id="FanDuel.Sports.Network.Southeast.HDTV.(Mont./Birm./Dothan/Mobile.AL).us" tvg-name="FDSN Southeast" tvg-logo="http://schedulesdirect-api20141201-logos.s3.dualstack.us-east-1.amazonaws.com/stationLogos/s20789_dark_360w_270h.png" group-title="TV",FDSN Southeast
http://mytvstream.net:8080/live/A1Jay5/362586/2213.m3u8
#EXTVLCOPT:http-user-agent=curl/8.5.0
http://lucidhosting.xyz:82/sandriassoc@gmail.com/Sm8G4ddxoW/222130
#EXTINF:-1 tvg-chno="54" tvg-id="FanDuel.Sports.Network.Southwest.HDTV.24/7.(Main).us" tvg-name="FDSN Southwest" tvg-logo="https://schedulesdirect-api20141201-logos.s3.dualstack.us-east-1.amazonaws.com/stationLogos/s59629_dark_360w_270h.png" group-title="TV",FDSN Southwest
http://mytvstream.net:8080/live/A1Jay5/362586/21843.m3u8
#EXTVLCOPT:http-user-agent=curl/8.5.0
http://lucidhosting.xyz:82/sandriassoc@gmail.com/Sm8G4ddxoW/220452
#EXTINF:-1 tvg-chno="55" tvg-id="FanDuel.Sports.Network.Sun.South.24/7.HDTV.(South.Marlins,.Rays,.Heat).us" tvg-name="FDSN Sun" tvg-logo="http://schedulesdirect-api20141201-logos.s3.dualstack.us-east-1.amazonaws.com/stationLogos/s61084_dark_360w_270h.png" group-title="TV",FDSN Sun
http://mytvstream.net:8080/live/A1Jay5/362586/104917.m3u8
#EXTVLCOPT:http-user-agent=curl/8.5.0
http://lucidhosting.xyz:82/sandriassoc@gmail.com/Sm8G4ddxoW/222132
#EXTINF:-1 tvg-chno="56" tvg-id="FanDuel.Sports.Network.West.HDTV.us" tvg-name="FDSN West" tvg-logo="https://schedulesdirect-api20141201-logos.s3.dualstack.us-east-1.amazonaws.com/stationLogos/s59627_dark_360w_270h.png" group-title="TV",FDSN West
http://mytvstream.net:8080/live/A1Jay5/362586/20932.m3u8
@@ -211,7 +217,8 @@ https://fl1.moveonjoy.com/FXX/index.m3u8
http://fl1.moveonjoy.com/FYI/index.m3u8
#EXTINF:-1 tvg-chno="71" tvg-id="Game.Show.Network.HD.us2" tvg-name="Game Show Network" tvg-logo="http://schedulesdirect-api20141201-logos.s3.dualstack.us-east-1.amazonaws.com/stationLogos/s14909_dark_360w_270h.png" group-title="TV",Game Show Network
https://streamer1.nexgen.bz/GSN/index.m3u8
#EXTVLCOPT:http-user-agent=curl/8.5.0
http://lucidhosting.xyz:82/sandriassoc@gmail.com/Sm8G4ddxoW/1948
#EXTINF:-1 tvg-chno="72" tvg-id="get.us2" tvg-name="getTV" tvg-logo="http://schedulesdirect-api20141201-logos.s3.dualstack.us-east-1.amazonaws.com/stationLogos/s82563_dark_360w_270h.png" group-title="TV",getTV
http://fl1.moveonjoy.com/GET_TV/index.m3u8
@@ -235,7 +242,7 @@ https://fl1.moveonjoy.com/HALLMARK_MOVIES_MYSTERIES/index.m3u8
http://fl1.moveonjoy.com/HBO/index.m3u8
#EXTINF:-1 tvg-chno="79" tvg-id="HBO2.HD.us2" tvg-name="HBO 2" tvg-logo="http://schedulesdirect-api20141201-logos.s3.dualstack.us-east-1.amazonaws.com/stationLogos/s68140_dark_360w_270h.png" group-title="TV",HBO 2
http://fl1.moveonjoy.com/HBO_2/index.m3u8
http://lucidhosting.xyz:82/sandriassoc@gmail.com/Sm8G4ddxoW/2071
#EXTINF:-1 tvg-chno="80" tvg-id="HBO.Comedy.HD.us2" tvg-name="HBO Comedy" tvg-logo="http://schedulesdirect-api20141201-logos.s3.dualstack.us-east-1.amazonaws.com/stationLogos/s59839_dark_360w_270h.png" group-title="TV",HBO Comedy
http://fl1.moveonjoy.com/HBO_COMEDY/index.m3u8
@@ -436,4 +443,4 @@ http://hardcoremedia.xyz:80/NW3Vk7xXwW/8375773282/129973
https://fl1.moveonjoy.com/VICELAND/index.m3u8
#EXTINF:-1 tvg-chno="146" tvg-id="Yes.Network.us2" tvg-name="YES Network" tvg-logo="http://schedulesdirect-api20141201-logos.s3.dualstack.us-east-1.amazonaws.com/stationLogos/s30017_dark_360w_270h.png" group-title="TV",YES Network
https://fl1.moveonjoy.com/YES_NETWORK/index.m3u8
https://fl1.moveonjoy.com/YES_NETWORK/index.m3u8
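The entries added above pair each new `lucidhosting.xyz` source with an `#EXTVLCOPT:http-user-agent=curl/8.5.0` line. `#EXTVLCOPT` is a VLC-specific playlist directive that applies to the item immediately following it, so VLC-compatible players send that User-Agent when opening the stream. A minimal sketch of emitting such a pair (the helper and the sample channel data are illustrative, not part of this repo):

```python
def extvlcopt_entry(extinf: str, stream_url: str, user_agent: str = "curl/8.5.0") -> str:
    """Build one playlist entry whose URL is fetched with a forced User-Agent.

    The #EXTVLCOPT line must sit directly above the URL it applies to.
    """
    return "\n".join([extinf, f"#EXTVLCOPT:http-user-agent={user_agent}", stream_url])


print(extvlcopt_entry(
    '#EXTINF:-1 tvg-chno="11" tvg-name="BET" group-title="TV",BET',
    "http://example.invalid/BET/index.m3u8",  # placeholder URL, not a real source
))
```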

View file

@@ -13,7 +13,9 @@ BASE_M3U8 = Path(__file__).parent / "base.m3u8"
EPG_FILE = Path(__file__).parent / "TV.xml"
EPG_URLS = [
LIVE_IMG = "https://i.gyazo.com/978f2eb4a199ca5b56b447aded0cb9e3.png"
EPG_URLS = {
"https://epgshare01.online/epgshare01/epg_ripper_CA2.xml.gz",
"https://epgshare01.online/epgshare01/epg_ripper_DUMMY_CHANNELS.xml.gz",
"https://epgshare01.online/epgshare01/epg_ripper_FANDUEL1.xml.gz",
@@ -22,9 +24,7 @@ EPG_URLS = [
"https://epgshare01.online/epgshare01/epg_ripper_US2.xml.gz",
"https://epgshare01.online/epgshare01/epg_ripper_US_LOCALS1.xml.gz",
"https://i.mjh.nz/Roku/all.xml.gz",
]
LIVE_IMG = "https://i.gyazo.com/978f2eb4a199ca5b56b447aded0cb9e3.png"
}
DUMMIES = {
"Basketball.Dummy.us": LIVE_IMG,
@@ -48,39 +48,47 @@ REPLACE_IDs = {
def get_tvg_ids() -> dict[str, str]:
base_m3u8 = BASE_M3U8.read_text(encoding="utf-8").splitlines()
tvg: dict[str, str] = {}
tvg = {}
for line in BASE_M3U8.read_text(encoding="utf-8").splitlines():
if not line.startswith("#EXTINF"):
continue
for line in base_m3u8:
if line.startswith("#EXTINF"):
tvg_id = re.search(r'tvg-id="([^"]*)"', line)[1]
tvg_logo = re.search(r'tvg-logo="([^"]*)"', line)[1]
tvg_id = re.search(r'tvg-id="([^"]*)"', line)
tvg_logo = re.search(r'tvg-logo="([^"]*)"', line)
tvg[tvg_id] = tvg_logo
if tvg_id:
tvg[tvg_id[1]] = tvg_logo[1] if tvg_logo else None
tvg |= DUMMIES
tvg |= {v["old"]: LIVE_IMG for v in REPLACE_IDs.values()}
return tvg
async def fetch_xml(url: str) -> ET.Element | None:
if not (html_data := await network.request(url, log=log)):
if not (xml_data := await network.request(url, log=log)):
return
try:
decompressed_data = gzip.decompress(html_data.content)
log.info(f'Parsing XML from "{url}"')
return ET.fromstring(decompressed_data)
data = gzip.decompress(xml_data.content)
return ET.fromstring(data)
except Exception as e:
log.error(f'Failed to decompress and parse XML from "{url}": {e}')
log.error(f'Failed to parse from "{url}": {e}')
return
def hijack_id(
root: ET.Element,
*,
old: str,
new: str,
text: str,
root: ET.Element,
) -> None:
og_channel = root.find(f"./channel[@id='{old}']")
@@ -90,7 +98,7 @@ def hijack_id(
display_name = og_channel.find("display-name")
if display_name is not None:
if (display_name := og_channel.find("display-name")) is not None:
new_channel.append(ET.Element("display-name", display_name.attrib))
new_channel[-1].text = text
@@ -114,9 +122,7 @@ def hijack_id(
new_program.append(new_child)
for tag_name in ["title", "desc", "sub-title"]:
tag = new_program.find(tag_name)
if tag is not None:
if (tag := new_program.find(tag_name)) is not None:
tag.text = text
root.remove(program)
@@ -129,48 +135,60 @@ async def main() -> None:
tvg_ids = get_tvg_ids()
tvg_ids |= DUMMIES | {v["old"]: LIVE_IMG for v in REPLACE_IDs.values()}
parsed_tvg_ids: set[str] = set()
root = ET.Element("tv")
tasks = [fetch_xml(url) for url in EPG_URLS]
results = await asyncio.gather(*tasks)
for epg_data in results:
if epg_data is None:
continue
epgs = await asyncio.gather(*(fetch_xml(url) for url in EPG_URLS))
for epg_data in (epg for epg in epgs if epg is not None):
for channel in epg_data.findall("channel"):
if (channel_id := channel.get("id")) in tvg_ids:
for icon_tag in channel.findall("icon"):
if logo := tvg_ids.get(channel_id):
icon_tag.set("src", logo)
if (channel_id := channel.get("id")) not in tvg_ids:
continue
if (url_tag := channel.find("url")) is not None:
channel.remove(url_tag)
parsed_tvg_ids.add(channel_id)
root.append(channel)
for icon_tag in channel.findall("icon"):
if logo := tvg_ids.get(channel_id):
icon_tag.set("src", logo)
if (url_tag := channel.find("url")) is not None:
channel.remove(url_tag)
root.append(channel)
for program in epg_data.findall("programme"):
if program.get("channel") in tvg_ids:
title_text = program.find("title").text
subtitle = program.find("sub-title")
if program.get("channel") not in tvg_ids:
continue
if (
title_text in ["NHL Hockey", "Live: NFL Football"]
and subtitle is not None
):
program.find("title").text = f"{title_text} {subtitle.text}"
title_text = program.find("title").text
root.append(program)
subtitle = program.find("sub-title")
for k, v in REPLACE_IDs.items():
hijack_id(**v, text=k, root=root)
if (
title_text in ["NHL Hockey", "Live: NFL Football"]
and subtitle is not None
):
program.find("title").text = f"{title_text} {subtitle.text}"
root.append(program)
for title, ids in REPLACE_IDs.items():
hijack_id(root, **ids, text=title)
if missing_ids := set(tvg_ids) - parsed_tvg_ids:
log.warning(f"Missed {len(missing_ids)} TVG ID(s)")
for channel_id in missing_ids:
log.warning(f"Missing: {channel_id}")
tree = ET.ElementTree(root)
tree.write(EPG_FILE, encoding="utf-8", xml_declaration=True)
tree.write(
EPG_FILE,
encoding="utf-8",
xml_declaration=True,
)
log.info(f"EPG saved to {EPG_FILE.resolve()}")

File diff suppressed because it is too large

View file

@@ -21,6 +21,7 @@ from scrapers import (
streamfree,
streamhub,
streamsgate,
totalsportek,
tvpass,
watchfooty,
webcast,
@@ -70,6 +71,7 @@ async def main() -> None:
asyncio.create_task(streamcenter.scrape(xtrnl_brwsr)),
# asyncio.create_task(streamhub.scrape(xtrnl_brwsr)),
asyncio.create_task(streamsgate.scrape(xtrnl_brwsr)),
asyncio.create_task(totalsportek.scrape(hdl_brwsr)),
asyncio.create_task(webcast.scrape(hdl_brwsr)),
asyncio.create_task(watchfooty.scrape(xtrnl_brwsr)),
]
@@ -80,10 +82,10 @@ async def main() -> None:
asyncio.create_task(pawa.scrape()),
asyncio.create_task(roxie.scrape()),
asyncio.create_task(shark.scrape()),
asyncio.create_task(streambtw.scrape()),
# asyncio.create_task(streambtw.scrape()),
asyncio.create_task(streamfree.scrape()),
asyncio.create_task(tvpass.scrape()),
asyncio.create_task(xstreameast.scrape()),
# asyncio.create_task(xstreameast.scrape()),
]
await asyncio.gather(*(pw_tasks + httpx_tasks))
@@ -112,6 +114,7 @@ async def main() -> None:
| streamfree.urls
| streamhub.urls
| streamsgate.urls
| totalsportek.urls
| tvpass.urls
| watchfooty.urls
| webcast.urls

View file

@@ -10,12 +10,12 @@ log = get_logger(__name__)
urls: dict[str, dict[str, str | float]] = {}
TAG = "LTVSX"
XML_CACHE = Cache(f"{TAG}-xml", exp=28_000)
TAG = "LIVETVSX"
CACHE_FILE = Cache(TAG, exp=10_800)
XML_CACHE = Cache(f"{TAG}-xml", exp=28_000)
BASE_URL = "https://cdn.livetv861.me/rss/upcoming_en.xml"
VALID_SPORTS = {"NBA", "NHL", "NFL", "NCAA", "MLB"}
@@ -160,8 +160,8 @@ async def get_events(cached_keys: list[str]) -> list[dict[str, str]]:
live = []
start_ts = now.delta(minutes=-30).timestamp()
end_ts = now.delta(minutes=30).timestamp()
start_ts = now.delta(hours=-1).timestamp()
end_ts = now.delta(minutes=5).timestamp()
for k, v in events.items():
if k in cached_keys:
@@ -193,7 +193,7 @@ async def scrape(browser: Browser) -> None:
log.info(f"Processing {len(events)} new URL(s)")
if events:
async with network.event_context(browser) as context:
async with network.event_context(browser, ignore_https=True) as context:
for i, ev in enumerate(events, start=1):
async with network.event_page(context) as page:
handler = partial(
@@ -210,10 +210,11 @@ async def scrape(browser: Browser) -> None:
log=log,
)
sport, event, ts = (
sport, event, ts, link = (
ev["sport"],
ev["event"],
ev["timestamp"],
ev["link"],
)
key = f"[{sport}] {event} ({TAG})"
@@ -226,6 +227,7 @@ async def scrape(browser: Browser) -> None:
"base": "https://livetv.sx/enx/",
"timestamp": ts,
"id": tvg_id or "Live.Event.us",
"link": link,
}
cached_urls[key] = entry
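The window change above means an event is now treated as live when its scheduled start falls between one hour ago and five minutes from now, rather than within ±30 minutes. A minimal sketch of that filter, under the assumption that the comparison keeps timestamps inside the window (the repo's own `Time` helper is replaced here with the standard library):

```python
from datetime import datetime, timedelta, timezone


def is_live(event_ts: float, now: datetime | None = None) -> bool:
    """True when the event start time lies in [now - 1 h, now + 5 min]."""
    now = now or datetime.now(timezone.utc)
    start_ts = (now - timedelta(hours=1)).timestamp()
    end_ts = (now + timedelta(minutes=5)).timestamp()
    return start_ts <= event_ts <= end_ts


# An event that started 30 minutes ago is still considered live:
print(is_live(datetime.now(timezone.utc).timestamp() - 1800))  # True
```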

View file

@@ -0,0 +1,146 @@
from functools import partial
from urllib.parse import urljoin, urlparse
from playwright.async_api import Browser
from selectolax.parser import HTMLParser
from .utils import Cache, Time, get_logger, leagues, network
log = get_logger(__name__)
urls: dict[str, dict[str, str | float]] = {}
TAG = "TOTALSPRTK"
CACHE_FILE = Cache(TAG, exp=28_800)
BASE_URL = "https://live3.totalsportek777.com/"
def fix_txt(s: str) -> str:
s = " ".join(s.split())
return s.upper() if s.islower() else s
async def get_events(cached_keys: list[str]) -> list[dict[str, str]]:
events = []
if not (html_data := await network.request(BASE_URL, log=log)):
return events
soup = HTMLParser(html_data.content)
sport = "Live Event"
for node in soup.css("a"):
if not node.attributes.get("class"):
continue
if (parent := node.parent) and "my-1" in parent.attributes.get("class", ""):
if span := node.css_first("span"):
sport = span.text(strip=True)
sport = fix_txt(sport)
if not (teams := [t.text(strip=True) for t in node.css(".col-7 .col-12")]):
continue
if not (href := node.attributes.get("href")):
continue
href = urlparse(href).path if href.startswith("http") else href
if not (time_node := node.css_first(".col-3 span")):
continue
if time_node.text(strip=True) != "MatchStarted":
continue
event_name = fix_txt(" vs ".join(teams))
if f"[{sport}] {event_name} ({TAG})" in cached_keys:
continue
events.append(
{
"sport": sport,
"event": event_name,
"link": urljoin(BASE_URL, href),
}
)
return events
async def scrape(browser: Browser) -> None:
cached_urls = CACHE_FILE.load()
valid_urls = {k: v for k, v in cached_urls.items() if v["url"]}
valid_count = cached_count = len(valid_urls)
urls.update(valid_urls)
log.info(f"Loaded {cached_count} event(s) from cache")
log.info(f'Scraping from "{BASE_URL}"')
events = await get_events(cached_urls.keys())
log.info(f"Processing {len(events)} new URL(s)")
if events:
now = Time.clean(Time.now())
async with network.event_context(browser) as context:
for i, ev in enumerate(events, start=1):
async with network.event_page(context) as page:
handler = partial(
network.process_event,
url=ev["link"],
url_num=i,
page=page,
log=log,
)
url = await network.safe_process(
handler,
url_num=i,
semaphore=network.HTTP_S,
log=log,
)
sport, event, link = (
ev["sport"],
ev["event"],
ev["link"],
)
key = f"[{sport}] {event} ({TAG})"
tvg_id, logo = leagues.get_tvg_info(sport, event)
entry = {
"url": url,
"logo": logo,
"base": link,
"timestamp": now.timestamp(),
"id": tvg_id or "Live.Event.us",
"link": link,
}
cached_urls[key] = entry
if url:
valid_count += 1
urls[key] = entry
if new_count := valid_count - cached_count:
log.info(f"Collected and cached {new_count} new event(s)")
else:
log.info("No new events found")
CACHE_FILE.write(cached_urls)
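The new module exposes the same interface as the other scrapers: a module-level `urls` dict plus an async `scrape(browser)` coroutine, which the main scraper schedules via `asyncio.create_task(totalsportek.scrape(hdl_brwsr))`. A hedged sketch of running it on its own, assuming Playwright is installed and the `scrapers` package is importable:

```python
import asyncio

from playwright.async_api import async_playwright

from scrapers import totalsportek


async def run() -> None:
    async with async_playwright() as pw:
        browser = await pw.chromium.launch(headless=True)
        await totalsportek.scrape(browser)
        await browser.close()
    print(f"Collected {len(totalsportek.urls)} event URL(s)")


asyncio.run(run())
```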

View file

@@ -811,65 +811,94 @@
"teams": {
"NBA": [
"76ers",
"Atlanta",
"Atlanta Hawks",
"Blazers",
"Boston",
"Boston Celtics",
"Brooklyn Nets",
"Bucks",
"Bulls",
"Cavaliers",
"Celtics",
"Charlotte",
"Charlotte Hornets",
"Chicago",
"Chicago Bulls",
"Cleveland",
"Cleveland Cavaliers",
"Clippers",
"Dallas",
"Dallas Mavericks",
"Denver",
"Denver Nuggets",
"Detroit",
"Detroit Pistons",
"Golden State",
"Golden State Warriors",
"Grizzlies",
"Hawks",
"Heat",
"Hornets",
"Houston",
"Houston Rockets",
"Indiana",
"Indiana Pacers",
"Jazz",
"Kings",
"Knicks",
"Lakers",
"Los Angeles",
"Los Angeles Clippers",
"Los Angeles Lakers",
"Magic",
"Mavericks",
"Memphis",
"Memphis Grizzlies",
"Miami",
"Miami Heat",
"Milwaukee",
"Milwaukee Bucks",
"Minnesota",
"Minnesota Timberwolves",
"Nets",
"New Orleans",
"New Orleans Pelicans",
"New York",
"New York Knicks",
"Nuggets",
"Oklahoma",
"Oklahoma City",
"Oklahoma City Thunder",
"Orlando",
"Orlando Magic",
"Pacers",
"Pelicans",
"Philadelphia",
"Philadelphia 76ers",
"Phoenix",
"Phoenix Suns",
"Pistons",
"Portland",
"Portland Trail Blazers",
"Raptors",
"Rockets",
"Sacramento",
"Sacramento Kings",
"San Antonio",
"San Antonio Spurs",
"Sixers",
"Spurs",
"Suns",
"Thunder",
"Timberwolves",
"Toronto",
"Toronto Raptors",
"Trail Blazers",
"Utah",
"Utah Jazz",
"Warriors",
"Washington",
"Washington Wizards",
"Wizards",
"Wolves"

View file

@@ -129,12 +129,14 @@ class Network:
async def event_context(
browser: Browser,
stealth: bool = True,
ignore_https: bool = False,
) -> AsyncGenerator[BrowserContext, None]:
context: BrowserContext | None = None
try:
context = await browser.new_context(
user_agent=Network.UA if stealth else None,
ignore_https_errors=ignore_https,
viewport={"width": 1366, "height": 768},
device_scale_factor=1,
locale="en-US",

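The new `ignore_https` flag above maps onto Playwright's `ignore_https_errors` context option, which `livetvsx.py` now enables so event pages behind broken or self-signed TLS still load instead of aborting navigation. A minimal standalone sketch of the same option (the test URL points at a public self-signed-certificate demo host):

```python
import asyncio

from playwright.async_api import async_playwright


async def fetch_title(url: str) -> str:
    async with async_playwright() as pw:
        browser = await pw.chromium.launch(headless=True)
        # Equivalent of event_context(browser, ignore_https=True):
        # certificate errors are ignored rather than failing the navigation.
        context = await browser.new_context(ignore_https_errors=True)
        page = await context.new_page()
        await page.goto(url)
        title = await page.title()
        await browser.close()
        return title


print(asyncio.run(fetch_title("https://self-signed.badssl.com/")))
```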
View file

@@ -9,20 +9,24 @@ STATUSLOG=$(mktemp)
get_status() {
local url="$1"
local channel="$2"
local index="$3"
local total="$4"
local attempt response status_code
[[ "$url" != http* ]] && return
printf '[%d/%d] Checking %s\n' "$((index + 1))" "$total" "$url"
for attempt in $(seq 1 "$RETRY_COUNT"); do
response=$(
curl -skL \
-A "$UA" \
-H "Accept: */*" \
-H "Accept-Language: en-US,en;q=0.9" \
-H "Accept-Encoding: gzip, deflate, br" \
-H "Connection: keep-alive" \
-o /dev/null \
--max-time 15 \
--compressed \
--max-time 30 \
-w "%{http_code}" \
"$url" 2>&1
)
@@ -47,7 +51,7 @@ get_status() {
status_code="$response"
case "$status_code" in
200)
2* | 3*)
echo "PASS" >>"$STATUSLOG"
;;
@@ -71,6 +75,7 @@ get_status() {
check_links() {
echo "Checking links from: $base_file"
total_urls=$(grep -cE '^https?://' "$base_file")
channel_num=0
name=""
@@ -86,14 +91,14 @@ check_links() {
elif [[ "$line" =~ ^https?:// ]]; then
while (($(jobs -r | wc -l) >= MAX_JOBS)); do sleep 0.2; done
get_status "$line" "$name" &
get_status "$line" "$name" "$channel_num" "$total_urls" &
((channel_num++))
fi
done < <(cat "$base_file")
wait
echo "Done."
echo -e "\nDone."
}
write_readme() {
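The `case` change above widens the pass condition from exactly 200 to any 2xx or 3xx status, which lines up with the README log below no longer listing 302 responses as "Unknown status". A hedged Python rendering of the same rule (the function name is illustrative; the real check stays in `health.sh`):

```python
def classify(status_code: str) -> str:
    """PASS for any 2xx/3xx HTTP status, FAIL otherwise (mirrors the shell patterns 2* | 3*)."""
    return "PASS" if status_code[:1] in ("2", "3") else "FAIL"


for code in ("200", "302", "403", "502"):
    print(code, classify(code))
```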

View file

@@ -1,17 +1,18 @@
## Base Log @ 2026-01-27 03:57 UTC
## Base Log @ 2026-01-28 03:55 UTC
### ✅ Working Streams: 136<br>❌ Dead Streams: 9
### ✅ Working Streams: 136<br>❌ Dead Streams: 10
| Channel | Error (Code) | Link |
| ------- | ------------ | ---- |
| BBC World News | HTTP Error (404) | `http://fl1.moveonjoy.com/BBC_WORLD_NEWS/index.m3u8` |
| FDSN Southeast | HTTP Error (403) | `http://mytvstream.net:8080/live/A1Jay5/362586/2213.m3u8` |
| FDSN Southwest | HTTP Error (403) | `http://mytvstream.net:8080/live/A1Jay5/362586/21843.m3u8` |
| FDSN Ohio | HTTP Error (502) | `http://lucidhosting.xyz:82/sandriassoc@gmail.com/Sm8G4ddxoW/222126` |
| FDSN Oklahoma | HTTP Error (403) | `http://mytvstream.net:8080/live/A1Jay5/362586/20934.m3u8` |
| FDSN Southeast | HTTP Error (502) | `http://lucidhosting.xyz:82/sandriassoc@gmail.com/Sm8G4ddxoW/222130` |
| FDSN Southwest | HTTP Error (502) | `http://lucidhosting.xyz:82/sandriassoc@gmail.com/Sm8G4ddxoW/220452` |
| FDSN Sun | HTTP Error (502) | `http://lucidhosting.xyz:82/sandriassoc@gmail.com/Sm8G4ddxoW/222132` |
| FXX | HTTP Error (404) | `https://fl1.moveonjoy.com/FXX/index.m3u8` |
| Lifetime Movie Network | HTTP Error (404) | `https://fl1.moveonjoy.com/LIFETIME_MOVIE_NETWORK/index.m3u8` |
| NBC Sports Bay Area | Unknown status (302) | `http://hardcoremedia.xyz:80/NW3Vk7xXwW/8375773282/257216` |
| Paramount Network | HTTP Error (404) | `https://fl1.moveonjoy.com/PARAMOUNT_NETWORK/index.m3u8` |
| Premier Sports 2 | Unknown status (302) | `http://hardcoremedia.xyz:80/NW3Vk7xXwW/8375773282/117038` |
| NBC Sports Philadelphia | HTTP Error (403) | `http://hardcoremedia.xyz:80/NW3Vk7xXwW/8375773282/136477` |
| NFL RedZone | HTTP Error (502) | `http://hardcoremedia.xyz:80/NW3Vk7xXwW/8375773282/249239` |
| Premier Sports 2 | HTTP Error (502) | `http://hardcoremedia.xyz:80/NW3Vk7xXwW/8375773282/117038` |
| Sportsnet One | HTTP Error (403) | `http://mytvstream.net:8080/live/k4Svp2/645504/57297.m3u8` |
---
#### Base Channels URL