Today I ran my Python script in a Jupyter notebook in VS Code. The program crashed, and since then I cannot run the notebook at all, because I always get this error: "Failed to start the kernel. Unable to start kernel..." timed out waiting for the port to be used.
I have already reinstalled jupyter, notebook, and VS Code, but I don't know what to do next.
How do I configure Docker to develop two interdependent Python repositories efficiently?
In development: mount the framework into the runtime via a volume for fast iteration (hot reload).
In production: build a clean runtime image that does not depend on host volumes.
dev/
├── framework/
│ └── ...
└── runtime/
├── ...
└── Dockerfile
Addendum: the dependency can only evolve slowly (for example, it is an open-source project that needs user acceptance). Unit-testing the runtime in a plain venv with pip install -e is the obvious choice, but any CI/CD pipeline can only install the dependency once it has actually been released.
How can I run Docker locally before all of that, so I can move fast, break things, and iterate?
I want the login button to disappear once the user is logged into an account, but I can't seem to make it work. I have tried localStorage and sessionStorage, but neither works. What am I doing wrong?
Login:
var objPeople = [{
        username: "#",
        password: "#"
    },
    {
        username: "#",
        password: "#"
    },
    {
        username: "test",
        password: "123"
    }
]

function getInfo() {
    var username = document.getElementById("username").value
    var password = document.getElementById("password").value
    for (var i = 0; i < objPeople.length; i++) {
        if (username == objPeople[i].username && password == objPeople[i].password) {
            console.log(username + "is logged in.")
            window.location.href = "homepage.html"
            localStorage.setItem("signedIn", 1);
        }
    }
    console.log("Username/password is incorrect.")
}
Homepage:
const signedInTrue = localStorage.setItem("signedIn")
if (signedInTrue= 1) {
    document.getElementById("topNav").style.display = "block"
}
After running sleep 10 & interactively and then immediately closing the terminal it was started from, the sleep command is killed before the 10 seconds are up.
But when the same command is put into a script:
# this is a file named testfile.sh
sleep 10 &
and the script is executed as bash testfile.sh, the sleep command keeps running to completion even after the terminal is closed.
Why does sleep stop as soon as the terminal closes in one case, but keep running until it finishes in the other?
After reading quite a few posts about the uses of unions and about type punning (basically: it is not allowed, and you should rely on the compiler to optimize away memcpy calls), I am wondering whether the following use of a union is completely unsafe, whether it is outright undefined behavior, or whether it actually conforms to the rules for unions and is safe. The idea is that the underlying data is all of the same fundamental type (int, double, etc.); I just want to view it laid out or named in different ways. Perhaps I should call it "container punning" rather than "type punning".
// main.cpp
// compile with `c++ main.cpp -std=c++23`
#include <iostream>
#include <array>
#include <format>

struct Mat2 {
    union {
        std::array<float, 4> data;
        std::array<std::array<float, 2>, 2> rows;
        struct { float x, y, z, w; };
    };
};

int main() {
    // sometimes you want to think about it in rows
    Mat2 m{ .rows = {{ {0, 1},
                       {2, 3} }}};
    for (auto &row : m.rows) {
        for (auto &col : row) {
            std::cout << col << " ";
        }
        std::cout << "\n";
    }

    // sometimes you want to think about it as a block of data
    std::cout << "\n";
    for (auto &d : m.data) {
        std::cout << d << " ";
    }
    std::cout << "\n\n";

    // sometimes you want to access elements based on semantic names
    std::cout << std::format("{}, {}, {}, {}", m.x, m.y, m.z, m.w);
    std::cout << std::endl;
}
I have a split form in MS Access into which I inserted several command buttons.
From left to right, they are:
I want these buttons to do exactly the same thing as the corresponding buttons in the ribbon.
I got (A) and (C) working with the following VBA:
Private Sub cmdClearAllFilter_Click()
    DoCmd.RunCommand acCmdRemoveFilterSort
End Sub

Private Sub cmdFilterByForm_Click()
    DoCmd.RunCommand acCmdFilterByForm
End Sub
But I am still stuck on (B) and (D); I want them to behave exactly like clicking the ribbon buttons marked below (preferably with custom VBA using DoCmd.RunCommand):
For (B), I tried:
Private Sub cmdFilterBySelection_Click()
    DoCmd.RunCommand acCmdFilterBySelection
End Sub
I can't really describe what it does, because I don't fully understand its behavior. The only conclusion I can draw is that it does not behave like the ribbon button, whose dropdown offers four filter options based on the marked cell/field/property/value.
For (D), I tried:
Private Sub cmdToggleFilter_Click()
    DoCmd.RunCommand acCmdToggleFilter
End Sub
This has two problems:
On Error Resume Next
Any help or guidance would be greatly appreciated.
When I try to delete a row from my CSV file, it deletes everything else and then writes that one row sideways, with its characters split into separate columns. I have no idea what is happening. The images above show the before and after of my attempt to delete from the CSV file. It is really strange. Here is my code; the problem is at line 95:
import csv
import sys

FILENAME = "guests.csv"

def exit_program():
    print("Terminating program.")
    sys.exit()

def read_guests():
    try:
        guests = []
        with open(FILENAME, newline="") as file:
            reader = csv.reader(file)
            for row in reader:
                guests.append(row)
        return guests
    except FileNotFoundError as e:
        ## print(f"Could not find {FILENAME} file.")
        ## exit_program()
        return guests
    except Exception as e:
        print(type(e), e)
        exit_program()

def write_guests(guests):
    try:
        with open(FILENAME, "w", newline="") as file:
            ## raise BlockingIOError("Error raised for testing.")
            writer = csv.writer(file)
            writer.writerows(guests)
    except OSError as e:
        print(type(e), e)
        exit_program()
    except Exception as e:
        print(type(e), e)
        exit_program()

def list_guests(guests):
    number_of_guests = 0
    number_of_members = 0
    total_fee = 0
    for i, guests in enumerate(guests, start=1):
        print(f"{i}. Name: {guests[0]} {guests[1]}\n Meal: {guests[2]} \n Guest Type: {guests[3]} \n Amount due: ${guests[4]}")
        if guests[3] == "guest":
            number_of_guests +=1
        if guests[3] == "member":
            number_of_members +=1
        total_fee += 22
    print("Number of members: " +str(number_of_members))
    print("Number of guests: " +str(number_of_guests))
    print("Total fee paid by all attendees: " +str(total_fee))
    print()

def add_guests(guests):
    fname = input("First name: ")
    lname = input("Last name: ")
    while True:
        try:
            meal = str(input("Meal(chicken, vegetarian, or beef): "))
        except ValueError:
            print("Please enter a meal. Please try again.")
            continue
        if meal == "beef" :
            break
        if meal == "chicken" :
            break
        if meal == "vegetarian" :
            break
        else:
            print("Please enter a meal:(chicken, vegetarian, or beef)")
    while True:
        attendee_type = input("Are you a 'member' or 'guest'?")
        if attendee_type == "member" :
            break
        if attendee_type == "guest" :
            break
        else:
            print("Please enter either 'member' or 'guest': ")
    fee = 22
    guest = [fname, lname, meal, attendee_type, fee]
    guests.append(guest)
    write_guests(guests)
    print(f"{fname} was added.\n")

def delete_guest(guests):
    name = input("Enter the guest's first name: ")
    for i, guests in enumerate(guests, start=1):
        if name == guests[0]:
            del guests[i]
            write_guests(guests)
            print(f"{name} removed from catalog.")
            print("")
            break
    print(f"{name} doesn't exist in the list.")

def menu_report(guests):
    number_of_beef = 0
    number_of_chicken = 0
    number_of_vegetarian = 0
    for i, guests in enumerate(guests, start=1):
        if guests[2] == "beef":
            number_of_beef +=1
        if guests[2] == "chicken":
            number_of_chicken +=1
        if guests[2] == "vegetarian":
            number_of_vegetarian +=1
    print("Number of Chicken entrees: " +str(number_of_chicken))
    print("Number of Beef entrees: " +str(number_of_beef))
    print("Number of vegetarian Meals: " +str(number_of_vegetarian))
    print()

def display_menu():
    print("COMMAND MENU")
    print("list - List all guests")
    print("add - Add a guest")
    print("del - Delete a guest")
    print("menu - Report menu items")
    print("exit - Exit program")
    print()

def main():
    print("The Guests List program")
    print("")
    guests = read_guests()
    while True:
        display_menu()
        command = input("Command: ")
        if command.lower() == "list":
            list_guests(guests)
        elif command.lower() == "add":
            add_guests(guests)
        elif command.lower() == "del":
            delete_guest(guests)
        elif command.lower() == "menu":
            menu_report(guests)
        elif command.lower() == "exit":
            break
        else:
            print("Not a valid command. Please try again.\n")
    print("Bye!")
    quit()

if __name__ == "__main__":
    main()
This is an assignment due tomorrow! Argh!
I thought it would delete the row, but instead it saved only that row, putting each entry that was in a column of that row onto its own line. Every character of the original row is now split into its own column.
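To narrow it down, here is a small debugging sketch (my own addition, not part of the assignment code, and assuming the same guests.csv file) that uses the same loop as delete_guest but, instead of deleting and rewriting the file, just prints what the name guests refers to at the point where the del would run:

import csv

# Debugging sketch: same loop shape as delete_guest, but it only inspects what
# `guests` is bound to inside the loop instead of deleting and rewriting the file.
def delete_guest_debug(guests):
    name = input("Enter the guest's first name: ")
    for i, guests in enumerate(guests, start=1):
        if name == guests[0]:
            print(i, type(guests), guests)  # is this still the full list of rows?
            break

with open("guests.csv", newline="") as file:
    delete_guest_debug(list(csv.reader(file)))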
Given a Polars DataFrame, I want to extract all duplicated rows while also applying an additional filter condition, for example:
import polars as pl

df = pl.DataFrame({
    "name": ["Alice", "Bob", "Alice", "David", "Eve", "Bob", "Frank"],
    "city": ["NY", "LA", "NY", "SF", "LA", "LA", "NY"],
    "age": [25, 30, 25, 35, 28, 30, 40]
})

# Trying this:
df.filter((df.is_duplicated()) & (pl.col("city") == "NY")) # error
However, this raises an error:
SchemaError: cannot unpack series of type object into bool
which suggests that df.is_duplicated() returns a Series of type object, when in fact it is a Boolean Series.
Surprisingly, reordering the predicates so that the expression comes first makes it work (but why?):
df.filter((pl.col("city") == "NY") & (df.is_duplicated())) # works!
Correct output:
shape: (2, 3)
┌───────┬──────┬─────┐
│ name ┆ city ┆ age │
│ --- ┆ --- ┆ --- │
│ str ┆ str ┆ i64 │
╞═══════╪══════╪═════╡
│ Alice ┆ NY ┆ 25 │
│ Alice ┆ NY ┆ 25 │
└───────┴──────┴─────┘
I understand that when filtering duplicates over a subset of columns, the preferred approach is to use pl.struct, e.g.:
df.filter((pl.struct(df.columns).is_duplicated()) & (pl.col("city") == "NY")) # works
which works fine together with the additional filter condition.
However, I deliberately avoid pl.struct here, because my real DataFrame has 40 columns and I want to check for duplicate rows across all columns except three, so I do the following:
df.filter(df.drop("col1", "col2", "col3").is_duplicated())
This works fine and is much more convenient than spelling out all 37 columns in pl.struct. However, it breaks when the additional filter condition is added on the right-hand side, but not on the left:
df.filter(
    (df.drop("col1", "col2", "col3").is_duplicated()) & (pl.col("col5") == "something")
) # breaks!

df.filter(
    (pl.col("col5") == "something") & (df.drop("col1", "col2", "col3").is_duplicated())
) # works!
Why does the order of the predicates (Series & expression vs. expression & Series) matter inside .filter() here? Is this expected Polars behavior, or a bug?
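For reference, here is a minimal all-expression sketch of the same subset-duplicate check on the toy frame above; it assumes pl.exclude can stand in for "all columns except the ones I want to ignore" when used inside pl.struct (excluding age here in place of my real col1/col2/col3):

import polars as pl

df = pl.DataFrame({
    "name": ["Alice", "Bob", "Alice", "David", "Eve", "Bob", "Frank"],
    "city": ["NY", "LA", "NY", "SF", "LA", "LA", "NY"],
    "age": [25, 30, 25, 35, 28, 30, 40]
})

# Both operands of `&` are expressions here, so no pre-computed Series is mixed in,
# and the duplicate check runs over every column except "age".
print(df.filter(
    pl.struct(pl.exclude("age")).is_duplicated() & (pl.col("city") == "NY")
))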
I have set up hugepages, created the container, and deployed my DPDK application into it, but rte_eth_dev_count_avail returns 0 in my application. What might I be missing?
/usr/bin/fwdd -l 0-3 -n 4 --vdev=net_tap0,iface=eth0
function setup_hugepages()
{
    echo "Setup hugepages"
    sysctl -w vm.nr_hugepages=1024
    echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

    MOUNT_POINT="/mnt/huge"
    if [ ! $(mountpoint -q "${MOUNT_POINT}") ]; then
        echo "Mounting hugepages"
        mkdir -p ${MOUNT_POINT}
        mount -t hugetlbfs nodev ${MOUNT_POINT}
    else
        echo "Hugepages are already mounted at ${MOUNT_POINT}"
    fi
}
docker run -itd --privileged --cap-add=ALL \
    -v /sys/bus/pci/devices:/sys/bus/pci/devices \
    -v /sys/kernel/mm/hugepages:/sys/kernel/mm/hugepages \
    -v /sys/devices/system/node:/sys/devices/system/node \
    -v /dev:/dev \
    -v /mnt/huge:/mnt/huge \
    -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
    --name nstk nstk_image
int main(int argc, char* argv[])
{
    int ret = rte_eal_init(argc, argv);
    if (ret < 0) {
        NSTK_LOG_DEBUG("Error: EAL initialization failed");
        return EXIT_FAILURE;
    }

    if (rte_eth_dev_count_avail() == 0) {
        NSTK_LOG_DEBUG("Error: No available Ethernet ports");
        return EXIT_FAILURE;
    }
    ...
Consider the following code:
import pandas as pd
from sklearn.model_selection import train_test_split
# step 1
ids = list(range(1000))
label = 500 * [1.0] + 500 * [0.0]
df = pd.DataFrame({"id": ids, "label": label})
# step 2
train_p = 0.8
val_p = 0.1
test_p = 0.1
# step 3
n_train = int(len(df) * train_p)
n_val = int(len(df) * val_p)
n_test = len(df) - n_train - n_val
print("* Step 3")
print("train:", n_train)
print("val:", n_val)
print("test:", n_test)
print()
# step 4
train_ids, test_ids = train_test_split(df["id"], stratify=df.label, test_size=n_test, random_state=42)
# step 5
print("* Step 5. First split")
print( df.loc[df.id.isin(train_ids), "label"].value_counts() )
print( df.loc[df.id.isin(test_ids), "label"].value_counts() )
print()
# step 6
train_ids, val_ids = train_test_split(train_ids, stratify=df.loc[df.id.isin(train_ids), "label"], test_size=n_val, random_state=42)
# step 7
train_df = df[df["id"].isin(train_ids)]
val_df = df[df["id"].isin(val_ids)]
test_df = df[df["id"].isin(test_ids)]
# step 8
print("* Step 8. Final split")
print("train:", train_df["label"].value_counts())
print("val:", val_df["label"].value_counts())
print("test:", test_df["label"].value_counts())
Output:
* Step 3
train: 800
val: 100
test: 100
* Step 5. First split
label
1.0 450
0.0 450
Name: count, dtype: int64
label
1.0 50
0.0 50
Name: count, dtype: int64
* Step 8. Final split
train: label
0.0 404
1.0 396
Name: count, dtype: int64
val: label
1.0 54
0.0 46
Name: count, dtype: int64
test: label
1.0 50
0.0 50
Name: count, dtype: int64
As you can see, the second split in step 6 does not produce a balanced split on label (the statistics are printed in step 8). After the first split, the samples were still balanced (output of step 5), so the second split should have been able to keep the classes perfectly balanced.
What am I doing wrong?
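In case it helps, here is a standalone sanity-check sketch I put together (my own addition; it assumes train_test_split pairs the stratify labels with the first argument by position rather than by index). It re-runs steps 1-4 and then compares the order of the labels that step 6 passes as stratify with the order of train_ids:

import pandas as pd
from sklearn.model_selection import train_test_split

# Re-create steps 1-4.
ids = list(range(1000))
label = 500 * [1.0] + 500 * [0.0]
df = pd.DataFrame({"id": ids, "label": label})
n_test = len(df) - int(len(df) * 0.8) - int(len(df) * 0.1)
train_ids, test_ids = train_test_split(df["id"], stratify=df.label, test_size=n_test, random_state=42)

# Labels as step 6 passes them (df's row order) vs. the same labels in train_ids' shuffled order.
strat = df.loc[df.id.isin(train_ids), "label"]
aligned = df.set_index("id").loc[train_ids, "label"]
print((strat.to_numpy() == aligned.to_numpy()).all())  # False would mean they do not line up positionally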