Re: [PATCH v3 10/10] x86/split_lock: Handle #AC exception for split lock

From: kbuild test robot
Date: Mon Feb 04 2019 - 06:01:01 EST


Hi Fenghua,

I love your patch! Yet something to improve:

[auto build test ERROR on tip/auto-latest]
[also build test ERROR on v5.0-rc4 next-20190204]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url: https://github.com/0day-ci/linux/commits/Fenghua-Yu/x86-split_lock-Enable-AC-exception-for-split-locked-accesses/20190204-162843
config: i386-randconfig-a2-02040849 (attached as .config)
compiler: gcc-4.9 (Debian 4.9.4-2) 4.9.4
reproduce:
        # save the attached .config to linux build tree
        make ARCH=i386

All errors (new ones prefixed by >>):

ld: arch/x86/kernel/traps.o: in function `do_alignment_check':
arch/x86/kernel/traps.c:310: undefined reference to `do_ac_split_lock'
ld: arch/x86/kernel/setup.o: in function `setup_arch':
>> arch/x86/kernel/setup.c:965: undefined reference to `set_ac_split_lock'
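
Both undefined references point the same way: do_ac_split_lock() and set_ac_split_lock() appear to be declared for the callers in traps.c and setup.c, but their definitions are not linked into this i386 randconfig (for example because the defining file, or an #ifdef around it, depends on a 64-bit-only or Kconfig-gated option). The usual way to keep such call sites link-clean is to give them static inline stubs whenever the implementation is compiled out. The sketch below only illustrates that pattern; the header location, the CONFIG_SPLIT_LOCK_AC symbol and the exact signatures are assumptions, not taken from the patch:

/*
 * Hypothetical header sketch -- the Kconfig symbol, the signatures and
 * the placement are assumptions for illustration, not the patch's code.
 */
#include <linux/types.h>

struct pt_regs;

#ifdef CONFIG_SPLIT_LOCK_AC
void set_ac_split_lock(void);                   /* enable #AC on split locks */
bool do_ac_split_lock(struct pt_regs *regs);    /* handle a split-lock #AC   */
#else
/* Compiled-out stubs keep setup.c and traps.c linking on all configs. */
static inline void set_ac_split_lock(void) { }
static inline bool do_ac_split_lock(struct pt_regs *regs)
{
        return false;   /* not a split-lock #AC; take the normal #AC path */
}
#endif

With stubs like these (or an equivalent Kconfig/Makefile guarantee that the defining file is always built in), the references at traps.c:310 and setup.c:965 would resolve on every configuration the robot builds.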

vim +965 arch/x86/kernel/setup.c

949
950 strlcpy(command_line, boot_command_line, COMMAND_LINE_SIZE);
951 *cmdline_p = command_line;
952
953 /*
954 * x86_configure_nx() is called before parse_early_param() to detect
955 * whether hardware doesn't support NX (so that the early EHCI debug
956 * console setup can safely call set_fixmap()). It may then be called
957 * again from within noexec_setup() during parsing early parameters
958 * to honor the respective command line option.
959 */
960 x86_configure_nx();
961
962 parse_early_param();
963
964 /* Set up #AC for split lock at the earliest phase. */
> 965 set_ac_split_lock();
966
967 if (efi_enabled(EFI_BOOT))
968         efi_memblock_x86_reserve_range();
969 #ifdef CONFIG_MEMORY_HOTPLUG
970 /*
971 * Memory used by the kernel cannot be hot-removed because Linux
972 * cannot migrate the kernel pages. When memory hotplug is
973 * enabled, we should prevent memblock from allocating memory
974 * for the kernel.
975 *
976 * ACPI SRAT records all hotpluggable memory ranges. But before
977 * SRAT is parsed, we don't know about it.
978 *
979 * The kernel image is loaded into memory at very early time. We
980 * cannot prevent this anyway. So on NUMA system, we set any
981 * node the kernel resides in as un-hotpluggable.
982 *
983 * Since on modern servers, one node could have double-digit
984 * gigabytes memory, we can assume the memory around the kernel
985 * image is also un-hotpluggable. So before SRAT is parsed, just
986 * allocate memory near the kernel image to try the best to keep
987 * the kernel away from hotpluggable memory.
988 */
989 if (movable_node_is_enabled())
990         memblock_set_bottom_up(true);
991 #endif
992
993 x86_report_nx();
994
995 /* after early param, so could get panic from serial */
996 memblock_x86_reserve_range_setup_data();
997
998 if (acpi_mps_check()) {
999 #ifdef CONFIG_X86_LOCAL_APIC
1000         disable_apic = 1;
1001 #endif
1002         setup_clear_cpu_cap(X86_FEATURE_APIC);
1003 }
1004
1005 e820__reserve_setup_data();
1006 e820__finish_early_params();
1007
1008 if (efi_enabled(EFI_BOOT))
1009         efi_init();
1010
1011 dmi_scan_machine();
1012 dmi_memdev_walk();
1013 dmi_set_dump_stack_arch_desc();
1014
1015 /*
1016 * VMware detection requires dmi to be available, so this
1017 * needs to be done after dmi_scan_machine(), for the boot CPU.
1018 */
1019 init_hypervisor_platform();
1020
1021 tsc_early_init();
1022 x86_init.resources.probe_roms();
1023
1024 /* after parse_early_param, so could debug it */
1025 insert_resource(&iomem_resource, &code_resource);
1026 insert_resource(&iomem_resource, &data_resource);
1027 insert_resource(&iomem_resource, &bss_resource);
1028
1029 e820_add_kernel_range();
1030 trim_bios_range();
1031 #ifdef CONFIG_X86_32
1032 if (ppro_with_ram_bug()) {
1033         e820__range_update(0x70000000ULL, 0x40000ULL, E820_TYPE_RAM,
1034                            E820_TYPE_RESERVED);
1035         e820__update_table(e820_table);
1036         printk(KERN_INFO "fixed physical RAM map:\n");
1037         e820__print_table("bad_ppro");
1038 }
1039 #else
1040 early_gart_iommu_check();
1041 #endif
1042
1043 /*
1044 * partially used pages are not usable - thus
1045 * we are rounding upwards:
1046 */
1047 max_pfn = e820__end_of_ram_pfn();
1048
1049 /* update e820 for memory not covered by WB MTRRs */
1050 mtrr_bp_init();
1051 if (mtrr_trim_uncached_memory(max_pfn))
1052         max_pfn = e820__end_of_ram_pfn();
1053
1054 max_possible_pfn = max_pfn;
1055
1056 /*
1057 * This call is required when the CPU does not support PAT. If
1058 * mtrr_bp_init() invoked it already via pat_init() the call has no
1059 * effect.
1060 */
1061 init_cache_modes();
1062
1063 /*
1064 * Define random base addresses for memory sections after max_pfn is
1065 * defined and before each memory section base is used.
1066 */
1067 kernel_randomize_memory();
1068
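
For context, the call at line 965 runs in setup_arch() immediately after parse_early_param(), i.e. very early on the boot CPU and unconditionally for every x86 build, 32-bit included, so whatever implements set_ac_split_lock() must be available (or stubbed out, as sketched above) in all configurations. As a rough idea of what the enabling side of such a function usually does -- a minimal sketch only, where the MSR name, the bit and the feature flag are assumptions rather than the patch's actual definitions:

#include <linux/types.h>
#include <asm/msr.h>
#include <asm/cpufeature.h>

#define MSR_TEST_CTL                  0x00000033      /* assumed MSR  */
#define TEST_CTL_ENABLE_AC_SPLIT_LOCK (1ULL << 29)    /* assumed bit  */

void set_ac_split_lock(void)
{
        u64 val;

        /* Only poke the MSR when the CPU advertises the capability. */
        if (!boot_cpu_has(X86_FEATURE_SPLIT_LOCK_AC))  /* assumed flag */
                return;

        rdmsrl(MSR_TEST_CTL, val);
        wrmsrl(MSR_TEST_CTL, val | TEST_CTL_ENABLE_AC_SPLIT_LOCK);
}

Setting the bit this early (before SMP bring-up) only covers the boot CPU; secondary CPUs would typically need the same enable on their own bring-up path.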

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

Attachment: .config.gz
Description: application/gzip