
SpiderUnderUrBed

SpiderUnderUrBed@lemmy.zip
16 posts • 21 comments

How did they get the rope around him lol


Weird thing is, I only had one mpsc in my app state. It was behind an Arc&lt;Mutex&gt;, and the code that accessed it ran asynchronously, but somehow all the tasks got the same messages for a bit, then stopped or got very partial messages, except the intended recipient (they got the full message). Is this some memory issue, or a race condition?

I don’t have this issue after switching to broadcast, but I’m confused.
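
For anyone who hits the same thing: a tokio mpsc channel has exactly one Receiver, so sharing it behind an Arc&lt;Mutex&gt; means the async tasks compete for it and each message is delivered to exactly one of them, which would explain the very partial messages. A broadcast channel gives every subscriber its own Receiver and a copy of every message. A minimal sketch of the difference, assuming tokio (with its full feature set) is the channel library in question:

use std::{sync::Arc, time::Duration};
use tokio::sync::{broadcast, mpsc, Mutex};

#[tokio::main]
async fn main() {
    // mpsc has exactly one Receiver. Sharing it behind Arc<Mutex> makes
    // tasks *compete*: each message goes to whichever task locks first.
    let (tx, rx) = mpsc::channel::<String>(16);
    let rx = Arc::new(Mutex::new(rx));
    for i in 0..3 {
        let rx = Arc::clone(&rx);
        tokio::spawn(async move {
            // The guard is held through the body, so tasks take turns.
            while let Some(msg) = rx.lock().await.recv().await {
                println!("mpsc task {i} got {msg}"); // one task per message
            }
        });
    }
    tx.send("hello".to_string()).await.unwrap();

    // broadcast gives each subscriber its own Receiver; every subscriber
    // sees every message, which is the behaviour wanted here.
    let (btx, _rx0) = broadcast::channel::<String>(16);
    for i in 0..3 {
        let mut brx = btx.subscribe();
        tokio::spawn(async move {
            while let Ok(msg) = brx.recv().await {
                println!("broadcast task {i} got {msg}"); // all tasks, same message
            }
        });
    }
    btx.send("hello".to_string()).unwrap();

    tokio::time::sleep(Duration::from_millis(100)).await; // let tasks print
}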


Never mind, fixed. This is what I applied (or maybe I should have waited a bit and it might have worked on its own); regardless, in case it’s useful to anyone:

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        hosts /etc/coredns/NodeHosts {
            ttl 60
            reload 15s
            fallthrough
        }
        prometheus :9153
        forward . 1.1.1.1 1.0.0.1 8.8.8.8 8.8.4.4
        cache 30
        loop
        reload
        loadbalance
    }
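
In case anyone wants to apply the same fix, something like the following should work (the file name is just whatever you saved the ConfigMap as). With the reload plugin in the Corefile, CoreDNS should pick the change up on its own after a short delay, but a rollout restart forces it immediately:

kubectl apply -f coredns-configmap.yaml
kubectl -n kube-system rollout restart deployment coredns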

The issue is solved now, thanks


Ok so, I think it was running on the wrong node and using that node’s resolv.conf, which I did not update. But I am getting a new issue:

2025-05-02T21:42:30Z INF Starting tunnel tunnelID=72c14e86-612a-46a7-a80f-14cfac1f0764
2025-05-02T21:42:30Z INF Version 2025.4.2 (Checksum b1ac33cda3705e8bac2c627dfd95070cb6811024e7263d4a554060d3d8561b33)
2025-05-02T21:42:30Z INF GOOS: linux, GOVersion: go1.22.5-devel-cf, GoArch: arm64
2025-05-02T21:42:30Z INF Settings: map[no-autoupdate:true]
2025-05-02T21:42:30Z INF Environmental variables map[TUNNEL_TOKEN:*****]
2025-05-02T21:42:30Z INF Generated Connector ID: 7679bafd-f44f-41de-ab1e-96f90aa9cc34
2025-05-02T21:42:40Z ERR Failed to fetch features, default to disable error="lookup cfd-features.argotunnel.com on 10.90.0.10:53: dial udp 10.90.0.10:53: i/o timeout"
2025-05-02T21:43:30Z WRN Unable to lookup protocol percentage.
2025-05-02T21:43:30Z INF Initial protocol quic
2025-05-02T21:43:30Z INF ICMP proxy will use 10.60.0.194 as source for IPv4
2025-05-02T21:43:30Z INF ICMP proxy will use fe80::eca8:3eff:fef1:c964 in zone eth0 as source for IPv6

2025-05-02T21:42:40Z ERR Failed to fetch features, default to disable error="lookup cfd-features.argotunnel.com on 10.90.0.10:53: dial udp 10.90.0.10:53: i/o timeout"

kube-dns usually isn’t supposed to give an i/o timeout when resolving external domains; I’m pretty sure it’s supposed to forward the query to an upstream DNS server. Or do I have to configure that?
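
For what it’s worth: with the Corefile above, external names are already forwarded upstream by the forward plugin, so an i/o timeout against 10.90.0.10:53 usually means the pod cannot reach the DNS service at all (a routing/CNI problem) rather than a missing forwarder. A quick way to test resolution from inside the cluster (pod name and image are arbitrary; 10.90.0.10 is the service IP from the error above):

kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup cfd-features.argotunnel.com 10.90.0.10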


??? He said he talked to the principal multiple times


I’ll go there. I don’t constantly post questions, but I’ve recently been having a lot of issues with CNIs. I might just delete this post.


I increased Envoy’s memory requirement to 1 GB; it did not fix the issue.
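
For context, that kind of bump would look roughly like this in the Envoy container spec (the value is the one described above; where exactly it goes depends on how Envoy is deployed):

resources:
  requests:
    memory: 1Gi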


Well, I switched to Cilium, same issue. The reason I started using a different CNI earlier than I intended was that Flannel didn’t work.

This issue might seem complex, but could you suggest some debugging steps and logs to check so I can get to the source of the issue, or at least a way to reproduce it (so I could maybe file a bug report)?
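
Not an official checklist, but some generic starting points for this kind of CNI/DNS debugging (the label selectors assume stock CoreDNS and the Cilium agent):

kubectl get pods -A -o wide                        # are the CNI and DNS pods healthy, and on which nodes?
kubectl -n kube-system logs -l k8s-app=kube-dns    # CoreDNS logs
kubectl -n kube-system logs -l k8s-app=cilium      # Cilium agent logs
kubectl get events -A --sort-by=.lastTimestamp     # recent cluster events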

spiderunderurbed@raspberrypi:~/k8s $ kubectl get networkpolicy -A
No resources found
spiderunderurbed@raspberrypi:~/k8s $ 

No networkpolicies.

spiderunderurbed@raspberrypi:~/k8s $ kubectl get pods -A | grep -i dns
default                      pdns-admin-mysql-854c4f79d9-wsclq                         1/1     Running            1 (2d22h ago)    4d9h
default                      pdns-mysql-master-6cddc8cd54-cgbs9                        1/1     Running            0                7h49m
kube-system                  coredns-ff8999cc5-hchq6                                   1/1     Running            1 (2d22h ago)    4d11h
kube-system                  svclb-pdns-mysql-master-1993c118-8xqzh                    3/3     Running            0                4d
kube-system                  svclb-pdns-mysql-master-1993c118-whf5g                    3/3     Running            0                124m
spiderunderurbed@raspberrypi:~/k8s $ 

Ignore PowerDNS, that’s just extra stuff, but yeah, CoreDNS is running.

spiderunderurbed@raspberrypi:~/k8s $  kubectl get endpoints  -n kube-system
NAME             ENDPOINTS                                              AGE
kube-dns         172.16.246.61:53,172.16.246.61:53,172.16.246.61:9153   4d11h
metrics-server   172.16.246.45:10250                                    4d11h
traefik          <none>                                                 130m
spiderunderurbed@raspberrypi:~/k8s $ 

Endpoints above; services below:

spiderunderurbed@raspberrypi:~/k8s $ kubectl get svc -n kube-system
NAME             TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
kube-dns         ClusterIP      10.43.0.10      <none>        53/UDP,53/TCP,9153/TCP       4d11h
metrics-server   ClusterIP      10.43.67.112    <none>        443/TCP                      4d11h
traefik          LoadBalancer   10.43.116.221   <pending>     80:31123/TCP,443:30651/TCP   131m
spiderunderurbed@raspberrypi:~/k8s $ 