V2EX › zhoudaiyu › All replies, page 1 of 77
Total replies: 1522
34 days ago
Replied to snuglove's topic in DevOps: Do you Linux ops veterans use a Mac for work?
@wymisgod Transmit
@LavaC #1 Multi-GPU probably drives some hardcore enthusiasts to pay up.
@yihaomizhijia #2 Couldn't multi-GPU interconnect be blocked at the driver level?
@xtreme1 #3 What do you mean by "synchronization hell"? It feels like even with open interconnect, some well-off players would still run multiple cards, e.g. chasing high frame rates at high resolutions?
@grc19900 #4 By "low efficiency", do you mean 2 × 100% only gets you something like 150%?
49 days ago
Replied to joetao123's topic in Linux: How does Huawei's openEuler differ from CentOS?
Not too different from CentOS 8, roughly. We run Kylin V10 SP2 and SP3, which are based on openEuler. Some yum repos only install packages after the system is forced to identify itself as CentOS 8; otherwise the repo doesn't recognize it.
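The reply doesn't spell out how the "identify as CentOS 8" trick is done. One common approach (my assumption, not confirmed by the thread) is to rewrite the `ID` and `VERSION_ID` fields in `/etc/os-release`, which much repo tooling keys on. Sketched here against a local copy so nothing on the host is touched:

```shell
# Hypothetical sketch: rewrite the os-release fields that repo tooling keys on.
# Working on a local copy; on a real host you would back up and edit
# /etc/os-release itself (and possibly /etc/system-release for yum).
cat > os-release <<'EOF'
NAME="Kylin Linux Advanced Server"
ID="kylin"
VERSION_ID="V10"
EOF
cp os-release os-release.bak                         # keep a backup to restore later
sed -i 's/^ID=.*/ID="centos"/' os-release            # masquerade as CentOS
sed -i 's/^VERSION_ID=.*/VERSION_ID="8"/' os-release
cat os-release
```

Restoring from the backup undoes the change; on a production box this kind of masquerading is worth reverting as soon as the package is installed, since other tooling also reads these fields.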
59 days ago
Replied to Legman's topic in Kubernetes: Which Linux distribution do you use for K8s cluster nodes?
@kd9yYw2RyhQwAwzn #34 Kylin V10 SP2 (ARM) and SP3 (C86)
62 days ago
Replied to zhoudaiyu's topic in Kubernetes: What CPUs do you all run in production K8s clusters?
@ljian6530 #2 Is yours an on-premises deployment too?
@virusdefender #3 We're under xinchuang (domestic-substitution) requirements; if we insisted on Intel we could only use old machines, though Hygon performance really is mediocre.
62 days ago
Replied to awesomePower's topic in Programmers: Any good blogs or tutorials on iptables?
@yangg #1 Rebooted the machine once and restarted Chrome n times; it's still like this.
@CodeAllen Actually I'm curious why every slot has to be populated to get full performance; that the specs must match I can understand.
A question for everyone: roughly which AMD or Intel CPUs do a pair of Hygon 7360s or 7375s compare to in performance? Also, a vendor recently told us that with two Hygon CPUs in one server, every memory slot on the board must be filled with same-spec DIMMs, otherwise application performance takes a hit, especially for memory-intensive workloads. Why would that be? Are they stringing us along?
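For what it's worth, the vendor's advice is consistent with how multi-channel memory interleaving works: the memory controller stripes accesses across populated channels, so peak bandwidth scales roughly with channel count, and a missing or mismatched DIMM can drop a channel out of the interleave set. A back-of-envelope sketch with assumed figures (DDR4-3200, 64-bit channels; the channel counts are illustrative, not Hygon specs):

```shell
# Peak bandwidth ≈ channels × transfer rate (MT/s) × 8 bytes per transfer.
# All numbers here are assumptions for illustration only.
awk 'BEGIN {
  mts = 3200; bytes = 8                      # DDR4-3200, 64-bit channel
  for (ch = 4; ch <= 8; ch += 4)
    printf "%d channels: %.1f GB/s\n", ch, ch * mts * bytes / 1000
}'
```

So halving the populated channels roughly halves the achievable bandwidth, which is why the penalty shows up mostly in memory-bound workloads.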
86 days ago
Replied to meitounaoba's topic in Cool Jobs: Network-layer protocol development engineer
Sounds criminal, very "cuffable". (A pun on "sounds fine, very reliable": the job description reads like something you could get arrested for.)
@AOIO7t
@luomao
@smartruid
@hahasong
@ming1455
@GensKinsey
@Pinealxx408
@YsHaNg Thanks everyone; I'll keep using my AirPods 3 and AirPods Pro 1. Not buying saves me ¥1,899 🐶
112 days ago
Replied to BenchWidth's topic in Kubernetes: K8s cluster running into a load-imbalance problem.
Try podAntiAffinity.
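A minimal sketch of what that suggestion could look like (label names such as `app: my-app` are placeholders, not from the thread): a required podAntiAffinity rule that keeps replicas of the same app off the same node, so the scheduler spreads them instead of packing one node.

```yaml
# Hypothetical pod spec fragment: forbid two pods carrying the same label
# from landing on the same node (hostname topology).
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: my-app            # placeholder label
        topologyKey: kubernetes.io/hostname
```

With `requiredDuringScheduling...` the constraint is hard; `preferredDuringSchedulingIgnoredDuringExecution` is the soft variant if strict spreading would leave pods unschedulable.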
@blackeeper Tried that; it was fine, actually.
@mdeche101644 None was generated; as I recall one only appears when the kernel itself crashes.
@msg7086 #17 Oracle's (UEK), right? It does look like it might be more stable; I'll look into it.
@ruidoBlanco #19 It's already 1160, the newest kernel for RHEL 7; going any newer means elrepo or the UEK kernel mentioned above, and leadership probably won't sign off on the upgrade.
An excerpt of the kernel log from when it hung:
Aug 15 09:33:53 node16 kernel: INFO: task jbd2/dm-2-8:1839 blocked for more than 120 seconds.
Aug 15 09:33:53 node16 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Aug 15 09:33:53 node16 kernel: jbd2/dm-2-8 D ffff8e7efea1acc0 0 1839 2 0x00000000
Aug 15 09:33:53 node16 kernel: Call Trace:
Aug 15 09:33:53 node16 kernel: [<ffffffff8e8d2ba0>] ? task_rq_unlock+0x20/0x20
Aug 15 09:33:53 node16 kernel: [<ffffffff8ef87169>] schedule+0x29/0x70
Aug 15 09:33:53 node16 kernel: [<ffffffffc118433c>] jbd2_journal_commit_transaction+0x23c/0x19c0 [jbd2]
Aug 15 09:33:53 node16 kernel: [<ffffffff8e8e111e>] ? account_entity_dequeue+0xae/0xd0
Aug 15 09:33:53 node16 kernel: [<ffffffff8e8e48bc>] ? dequeue_entity+0x11c/0x5c0
Aug 15 09:33:53 node16 kernel: [<ffffffff8e8e5ec1>] ? put_prev_entity+0x31/0x400
Aug 15 09:33:53 node16 kernel: [<ffffffff8e82b59e>] ? __switch_to+0xce/0x580
Aug 15 09:33:53 node16 kernel: [<ffffffff8e8c6d10>] ? wake_up_atomic_t+0x30/0x30
Aug 15 09:33:53 node16 kernel: [<ffffffff8ef86c8f>] ? __schedule+0x3af/0x860
Aug 15 09:33:53 node16 kernel: [<ffffffff8e8adf0e>] ? try_to_del_timer_sync+0x5e/0x90
Aug 15 09:33:53 node16 kernel: [<ffffffffc118af89>] kjournald2+0xc9/0x260 [jbd2]
Aug 15 09:33:53 node16 kernel: [<ffffffff8e8c6d10>] ? wake_up_atomic_t+0x30/0x30
Aug 15 09:33:53 node16 kernel: [<ffffffffc118aec0>] ? commit_timeout+0x10/0x10 [jbd2]
Aug 15 09:33:53 node16 kernel: [<ffffffff8e8c5c21>] kthread+0xd1/0xe0
Aug 15 09:33:53 node16 kernel: [<ffffffff8e8c5b50>] ? insert_kthread_work+0x40/0x40
Aug 15 09:33:53 node16 kernel: [<ffffffff8ef93ddd>] ret_from_fork_nospec_begin+0x7/0x21
Aug 15 09:33:53 node16 kernel: [<ffffffff8e8c5b50>] ? insert_kthread_work+0x40/0x40
Aug 15 09:33:53 node16 kernel: INFO: task containerd:225811 blocked for more than 120 seconds.
Aug 15 09:33:53 node16 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Aug 15 09:33:53 node16 kernel: containerd D ffff8e7efef1acc0 0 225811 1 0x00000080
Aug 15 09:33:53 node16 kernel: Call Trace:
Aug 15 09:33:53 node16 kernel: [<ffffffff8ef87169>] schedule+0x29/0x70
Aug 15 09:33:53 node16 kernel: [<ffffffffc1181085>] wait_transaction_locked+0x85/0xd0 [jbd2]
Aug 15 09:33:53 node16 kernel: [<ffffffff8e8c6d10>] ? wake_up_atomic_t+0x30/0x30
Aug 15 09:33:53 node16 kernel: [<ffffffffc1181378>] add_transaction_credits+0x278/0x310 [jbd2]
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea2830b>] ? __kmalloc+0x1eb/0x230
Aug 15 09:33:53 node16 kernel: [<ffffffffc11dd8c4>] ? ext4_htree_store_dirent+0x34/0x120 [ext4]
Aug 15 09:33:53 node16 kernel: [<ffffffffc1181601>] start_this_handle+0x1a1/0x430 [jbd2]
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea287c2>] ? kmem_cache_alloc+0x1c2/0x1f0
Aug 15 09:33:53 node16 kernel: [<ffffffffc1181ab3>] jbd2__journal_start+0xf3/0x1f0 [jbd2]
Aug 15 09:33:53 node16 kernel: [<ffffffffc11eb0ba>] ? ext4_dirty_inode+0x2a/0x60 [ext4]
Aug 15 09:33:53 node16 kernel: [<ffffffffc1217759>] __ext4_journal_start_sb+0x69/0xe0 [ext4]
Aug 15 09:33:53 node16 kernel: [<ffffffffc11eb0ba>] ext4_dirty_inode+0x2a/0x60 [ext4]
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea7f00d>] __mark_inode_dirty+0x15d/0x270
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea6b8e9>] update_time+0x89/0xd0
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea6bdfa>] touch_atime+0x10a/0x220
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea63694>] iterate_dir+0xe4/0x130
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea63c8c>] SyS_getdents64+0x9c/0x120
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea63900>] ? fillonedir+0x110/0x110
Aug 15 09:33:53 node16 kernel: [<ffffffff8ef93f92>] system_call_fastpath+0x25/0x2a
Aug 15 09:33:53 node16 kernel: INFO: task containerd:2700571 blocked for more than 120 seconds.
Aug 15 09:33:53 node16 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Aug 15 09:33:53 node16 kernel: containerd D ffff8e3eff79acc0 0 2700571 1 0x00000080
Aug 15 09:33:53 node16 kernel: Call Trace:
Aug 15 09:33:53 node16 kernel: [<ffffffff8ef87169>] schedule+0x29/0x70
Aug 15 09:33:53 node16 kernel: [<ffffffffc1181085>] wait_transaction_locked+0x85/0xd0 [jbd2]
Aug 15 09:33:53 node16 kernel: [<ffffffff8e8c6d10>] ? wake_up_atomic_t+0x30/0x30
Aug 15 09:33:53 node16 kernel: [<ffffffffc1181378>] add_transaction_credits+0x278/0x310 [jbd2]
Aug 15 09:33:53 node16 kernel: [<ffffffff8ef86c8f>] ? __schedule+0x3af/0x860
Aug 15 09:33:53 node16 kernel: [<ffffffffc1181601>] start_this_handle+0x1a1/0x430 [jbd2]
Aug 15 09:33:53 node16 kernel: [<ffffffff8ef87169>] ? schedule+0x29/0x70
Aug 15 09:33:53 node16 kernel: [<ffffffff8ef84c51>] ? schedule_timeout+0x221/0x2d0
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea287c2>] ? kmem_cache_alloc+0x1c2/0x1f0
Aug 15 09:33:53 node16 kernel: [<ffffffffc1181ab3>] jbd2__journal_start+0xf3/0x1f0 [jbd2]
Aug 15 09:33:53 node16 kernel: [<ffffffffc11eb0ba>] ? ext4_dirty_inode+0x2a/0x60 [ext4]
Aug 15 09:33:53 node16 kernel: [<ffffffffc1217759>] __ext4_journal_start_sb+0x69/0xe0 [ext4]
Aug 15 09:33:53 node16 kernel: [<ffffffffc11eb0ba>] ext4_dirty_inode+0x2a/0x60 [ext4]
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea7f00d>] __mark_inode_dirty+0x15d/0x270
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea6b8e9>] update_time+0x89/0xd0
Aug 15 09:33:53 node16 kernel: [<ffffffff8eb8cfe4>] ? __radix_tree_lookup+0x84/0xf0
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea6b9d0>] file_update_time+0xa0/0xf0
Aug 15 09:33:53 node16 kernel: [<ffffffff8e9c03d8>] __generic_file_aio_write+0x198/0x400
Aug 15 09:33:53 node16 kernel: [<ffffffff8e9c0699>] generic_file_aio_write+0x59/0xa0
Aug 15 09:33:53 node16 kernel: [<ffffffffc11de5c8>] ext4_file_write+0x348/0x600 [ext4]
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea010bc>] ? page_add_file_rmap+0x8c/0xc0
Aug 15 09:33:53 node16 kernel: [<ffffffff8e9f339e>] ? do_numa_page+0x1be/0x250
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea4d063>] do_sync_write+0x93/0xe0
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea4db50>] vfs_write+0xc0/0x1f0
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea4eaf2>] SyS_pwrite64+0x92/0xc0
Aug 15 09:33:53 node16 kernel: [<ffffffff8ef93f92>] system_call_fastpath+0x25/0x2a
Aug 15 09:33:53 node16 kernel: INFO: task dcgm-exporter:68381 blocked for more than 120 seconds.
Aug 15 09:33:53 node16 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Aug 15 09:33:53 node16 kernel: dcgm-exporter D ffff8e3eff81acc0 0 68381 57193 0x00000080
Aug 15 09:33:53 node16 kernel: Call Trace:
Aug 15 09:33:53 node16 kernel: [<ffffffff8ef87169>] schedule+0x29/0x70
Aug 15 09:33:53 node16 kernel: [<ffffffffc1181085>] wait_transaction_locked+0x85/0xd0 [jbd2]
Aug 15 09:33:53 node16 kernel: [<ffffffff8e8c6d10>] ? wake_up_atomic_t+0x30/0x30
Aug 15 09:33:53 node16 kernel: [<ffffffffc1181378>] add_transaction_credits+0x278/0x310 [jbd2]
Aug 15 09:33:53 node16 kernel: [<ffffffffc1181601>] start_this_handle+0x1a1/0x430 [jbd2]
Aug 15 09:33:53 node16 kernel: [<ffffffff8e8e8a43>] ? load_balance+0x1a3/0xa10
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea287c2>] ? kmem_cache_alloc+0x1c2/0x1f0
Aug 15 09:33:53 node16 kernel: [<ffffffffc1181ab3>] jbd2__journal_start+0xf3/0x1f0 [jbd2]
Aug 15 09:33:53 node16 kernel: [<ffffffffc11eb0ba>] ? ext4_dirty_inode+0x2a/0x60 [ext4]
Aug 15 09:33:53 node16 kernel: [<ffffffffc1217759>] __ext4_journal_start_sb+0x69/0xe0 [ext4]
Aug 15 09:33:53 node16 kernel: [<ffffffffc11eb0ba>] ext4_dirty_inode+0x2a/0x60 [ext4]
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea7f00d>] __mark_inode_dirty+0x15d/0x270
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea6b8e9>] update_time+0x89/0xd0
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea6b9d0>] file_update_time+0xa0/0xf0
Aug 15 09:33:53 node16 kernel: [<ffffffff8e9c03d8>] __generic_file_aio_write+0x198/0x400
Aug 15 09:33:53 node16 kernel: [<ffffffff8e9c0699>] generic_file_aio_write+0x59/0xa0
Aug 15 09:33:53 node16 kernel: [<ffffffffc11de5c8>] ext4_file_write+0x348/0x600 [ext4]
Aug 15 09:33:53 node16 kernel: [<ffffffff8e9f339e>] ? do_numa_page+0x1be/0x250
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea4d063>] do_sync_write+0x93/0xe0
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea4db50>] vfs_write+0xc0/0x1f0
Aug 15 09:33:53 node16 kernel: [<ffffffff8ea4e92f>] SyS_write+0x7f/0xf0
Aug 15 09:33:53 node16 kernel: [<ffffffff8ef93f92>] system_call_fastpath+0x25/0x2a
Aug 15 09:33:53 node16 kernel: INFO: task gcs_server:21032 blocked for more than 120 seconds.
@ruidoBlanco I'll post it shortly, mate.
@lrvy Long story. In short, we can use Red Hat's packages, but they won't support us when problems come up.
@Hormazed #8 We don't really dare run elrepo kernels in production, much as I'd like something newer.
@Hormazed #9 Our kernel is already newer than that; it's 1160.
@JackSlowFcck #11 The turnaround is too long; the machines are colocated in another data center, and some services are single points right now, so we have to keep them in service for the time being.
@Kumo31 #10 Most likely we'll ride 3.10 all the way. We also have xinchuang machines on a 4.19 kernel; we reported bugs there too and the vendor never acted on them.
@hefish #5 Just a suspicion, because a reboot fixed it.
@liuchao719 #6 Nobody dares run a newer OS; nobody can carry the blame if it breaks, so everyone sticks with the old one.
@iyiluo #1 The out-of-band disk status looked normal at the time, and everything recovered after the reboot; it's been running fine for a day now.
@barrysj #2 Yes: monitoring showed one disk's I/O dropping to nothing at that moment, reads and writes simply stopped, and processes all hung, yet iowait wasn't elevated.
The disks are three SAS SSDs in a RAID array.
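As an aside, a quick way to tally which processes the hung-task detector flagged in a dump like the one above (the `kern.log` file here is a stand-in recreated from the excerpt; on a live host you would feed in `dmesg` output). My reading of the trace is consistent with the stalled-disk observation: jbd2's journal commit blocks, and everything queuing ext4 journal transactions piles up behind it.

```shell
# Recreate a few of the INFO lines from the excerpt above, then tally the
# blocked task names reported by the kernel's hung-task detector.
cat > kern.log <<'EOF'
Aug 15 09:33:53 node16 kernel: INFO: task jbd2/dm-2-8:1839 blocked for more than 120 seconds.
Aug 15 09:33:53 node16 kernel: INFO: task containerd:225811 blocked for more than 120 seconds.
Aug 15 09:33:53 node16 kernel: INFO: task containerd:2700571 blocked for more than 120 seconds.
EOF
# Extract "task NAME:PID" → NAME, then count occurrences per name.
sed -n 's/.*INFO: task \(.*\):[0-9]* blocked.*/\1/p' kern.log | sort | uniq -c | sort -rn
```

The same one-liner works on `dmesg` output directly, and `echo w > /proc/sysrq-trigger` (as root) dumps full stacks of all D-state tasks if the machine is still reachable.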