Commit 5326695a by Andrew Stubbs (committed by Andrew Stubbs)

GCN back-end code

This patch contains the major part of the GCN back-end.  The machine
description has been broken out to avoid the mailing list size limit.

The back-end contains various bits that support OpenACC and OpenMP, but the
middle-end and libgomp patches are missing, as is mkoffload.  I include them
here because they're harmless and carving up the files seems like unnecessary
effort.  The remaining offload support will be posted at a later date.

gcn-run.c is a separate tool that can run a GCN program on a GPU using
the ROCm drivers and HSA runtime libraries.

2019-01-17  Andrew Stubbs  <ams@codesourcery.com>
	    Kwok Cheung Yeung  <kcy@codesourcery.com>
	    Julian Brown  <julian@codesourcery.com>
	    Tom de Vries  <tom@codesourcery.com>
	    Jan Hubicka  <hubicka@ucw.cz>
	    Martin Jambor  <mjambor@suse.cz>

	gcc/
	* common/config/gcn/gcn-common.c: New file.
	* config/gcn/driver-gcn.c: New file.
	* config/gcn/gcn-builtins.def: New file.
	* config/gcn/gcn-hsa.h: New file.
	* config/gcn/gcn-modes.def: New file.
	* config/gcn/gcn-opts.h: New file.
	* config/gcn/gcn-passes.def: New file.
	* config/gcn/gcn-protos.h: New file.
	* config/gcn/gcn-run.c: New file.
	* config/gcn/gcn-tree.c: New file.
	* config/gcn/gcn.c: New file.
	* config/gcn/gcn.h: New file.
	* config/gcn/gcn.opt: New file.
	* config/gcn/t-gcn-hsa: New file.


Co-Authored-By: Jan Hubicka <hubicka@ucw.cz>
Co-Authored-By: Julian Brown <julian@codesourcery.com>
Co-Authored-By: Kwok Cheung Yeung <kcy@codesourcery.com>
Co-Authored-By: Martin Jambor <mjambor@suse.cz>
Co-Authored-By: Tom de Vries <tom@codesourcery.com>

From-SVN: r268023
parent 3d6275e3
@@ -5,6 +5,28 @@
 	    Jan Hubicka  <hubicka@ucw.cz>
 	    Martin Jambor  <mjambor@suse.cz>
+	* common/config/gcn/gcn-common.c: New file.
+	* config/gcn/driver-gcn.c: New file.
+	* config/gcn/gcn-builtins.def: New file.
+	* config/gcn/gcn-hsa.h: New file.
+	* config/gcn/gcn-modes.def: New file.
+	* config/gcn/gcn-opts.h: New file.
+	* config/gcn/gcn-passes.def: New file.
+	* config/gcn/gcn-protos.h: New file.
+	* config/gcn/gcn-run.c: New file.
+	* config/gcn/gcn-tree.c: New file.
+	* config/gcn/gcn.c: New file.
+	* config/gcn/gcn.h: New file.
+	* config/gcn/gcn.opt: New file.
+	* config/gcn/t-gcn-hsa: New file.
+
+2019-01-17  Andrew Stubbs  <ams@codesourcery.com>
+	    Kwok Cheung Yeung  <kcy@codesourcery.com>
+	    Julian Brown  <julian@codesourcery.com>
+	    Tom de Vries  <tom@codesourcery.com>
+	    Jan Hubicka  <hubicka@ucw.cz>
+	    Martin Jambor  <mjambor@suse.cz>
 	* config/gcn/constraints.md: New file.
 	* config/gcn/gcn-valu.md: New file.
 	* config/gcn/gcn.md: New file.
...
/* Common hooks for GCN
Copyright (C) 2016-2019 Free Software Foundation, Inc.
This file is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
Software Foundation; either version 3 of the License, or (at your option)
any later version.
This file is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
for more details.
You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING3. If not see
<http://www.gnu.org/licenses/>. */
#include "config.h"
#include "system.h"
#include "coretypes.h"
#include "tm.h"
#include "common/common-target.h"
#include "common/common-target-def.h"
#include "opts.h"
#include "flags.h"
#include "params.h"
/* Set default optimization options. */
static const struct default_options gcn_option_optimization_table[] =
{
{ OPT_LEVELS_1_PLUS, OPT_fomit_frame_pointer, NULL, 1 },
{ OPT_LEVELS_NONE, 0, NULL, 0 }
};
#undef TARGET_OPTION_OPTIMIZATION_TABLE
#define TARGET_OPTION_OPTIMIZATION_TABLE gcn_option_optimization_table
struct gcc_targetm_common targetm_common = TARGETM_COMMON_INITIALIZER;
/* Subroutines for the gcc driver.
Copyright (C) 2018-2019 Free Software Foundation, Inc.
This file is part of GCC.
GCC is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 3, or (at your option)
any later version.
GCC is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING3. If not see
<http://www.gnu.org/licenses/>. */
#include "config.h"
#include "system.h"
#include "coretypes.h"
#include "tm.h"
const char *
last_arg_spec_function (int argc, const char **argv)
{
if (argc == 0)
return NULL;
return argv[argc-1];
}
/* Copyright (C) 2016-2019 Free Software Foundation, Inc.
This file is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
Software Foundation; either version 3 of the License, or (at your option)
any later version.
This file is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
for more details.
You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING3. If not see
<http://www.gnu.org/licenses/>. */
/* The first argument to these macros is the return type of the builtin,
the rest are arguments of the builtin. */
#define _A1(a) {a, GCN_BTI_END_OF_PARAMS}
#define _A2(a,b) {a, b, GCN_BTI_END_OF_PARAMS}
#define _A3(a,b,c) {a, b, c, GCN_BTI_END_OF_PARAMS}
#define _A4(a,b,c,d) {a, b, c, d, GCN_BTI_END_OF_PARAMS}
#define _A5(a,b,c,d,e) {a, b, c, d, e, GCN_BTI_END_OF_PARAMS}
DEF_BUILTIN (FLAT_LOAD_INT32, 1 /*CODE_FOR_flat_load_v64si*/,
"flat_load_int32", B_INSN,
_A3 (GCN_BTI_V64SI, GCN_BTI_EXEC, GCN_BTI_V64SI),
gcn_expand_builtin_1)
DEF_BUILTIN (FLAT_LOAD_PTR_INT32, 2 /*CODE_FOR_flat_load_ptr_v64si */,
"flat_load_ptr_int32", B_INSN,
_A4 (GCN_BTI_V64SI, GCN_BTI_EXEC, GCN_BTI_SIPTR, GCN_BTI_V64SI),
gcn_expand_builtin_1)
DEF_BUILTIN (FLAT_STORE_PTR_INT32, 3 /*CODE_FOR_flat_store_ptr_v64si */,
"flat_store_ptr_int32", B_INSN,
_A5 (GCN_BTI_VOID, GCN_BTI_EXEC, GCN_BTI_SIPTR, GCN_BTI_V64SI,
GCN_BTI_V64SI),
gcn_expand_builtin_1)
DEF_BUILTIN (FLAT_LOAD_PTR_FLOAT, 2 /*CODE_FOR_flat_load_ptr_v64sf */,
"flat_load_ptr_float", B_INSN,
_A4 (GCN_BTI_V64SF, GCN_BTI_EXEC, GCN_BTI_SFPTR, GCN_BTI_V64SI),
gcn_expand_builtin_1)
DEF_BUILTIN (FLAT_STORE_PTR_FLOAT, 3 /*CODE_FOR_flat_store_ptr_v64sf */,
"flat_store_ptr_float", B_INSN,
_A5 (GCN_BTI_VOID, GCN_BTI_EXEC, GCN_BTI_SFPTR, GCN_BTI_V64SI,
GCN_BTI_V64SF),
gcn_expand_builtin_1)
DEF_BUILTIN (SQRTVF, 3 /*CODE_FOR_sqrtvf */,
"sqrtvf", B_INSN,
_A2 (GCN_BTI_V64SF, GCN_BTI_V64SF),
gcn_expand_builtin_1)
DEF_BUILTIN (SQRTF, 3 /*CODE_FOR_sqrtf */,
"sqrtf", B_INSN,
_A2 (GCN_BTI_SF, GCN_BTI_SF),
gcn_expand_builtin_1)
DEF_BUILTIN (CMP_SWAP, -1,
"cmp_swap", B_INSN,
_A4 (GCN_BTI_UINT, GCN_BTI_VOIDPTR, GCN_BTI_UINT, GCN_BTI_UINT),
gcn_expand_builtin_1)
DEF_BUILTIN (CMP_SWAPLL, -1,
"cmp_swapll", B_INSN,
_A4 (GCN_BTI_LLUINT,
GCN_BTI_VOIDPTR, GCN_BTI_LLUINT, GCN_BTI_LLUINT),
gcn_expand_builtin_1)
/* DEF_BUILTIN_BINOP_INT_FP creates many variants of a builtin function for a
given operation. The first argument forms the base of the identifier of a
particular builtin, the second is used to form the name of the pattern
used to expand it, and the third is used to create the user-visible
builtin name. */
DEF_BUILTIN_BINOP_INT_FP (ADD, add, "add")
DEF_BUILTIN_BINOP_INT_FP (SUB, sub, "sub")
DEF_BUILTIN_BINOP_INT_FP (AND, and, "and")
DEF_BUILTIN_BINOP_INT_FP (IOR, ior, "or")
DEF_BUILTIN_BINOP_INT_FP (XOR, xor, "xor")
/* OpenMP. */
DEF_BUILTIN (OMP_DIM_SIZE, CODE_FOR_oacc_dim_size,
"dim_size", B_INSN,
_A2 (GCN_BTI_INT, GCN_BTI_INT),
gcn_expand_builtin_1)
DEF_BUILTIN (OMP_DIM_POS, CODE_FOR_oacc_dim_pos,
"dim_pos", B_INSN,
_A2 (GCN_BTI_INT, GCN_BTI_INT),
gcn_expand_builtin_1)
/* OpenACC. */
DEF_BUILTIN (ACC_SINGLE_START, -1, "single_start", B_INSN, _A1 (GCN_BTI_BOOL),
gcn_expand_builtin_1)
DEF_BUILTIN (ACC_SINGLE_COPY_START, -1, "single_copy_start", B_INSN,
_A1 (GCN_BTI_LDS_VOIDPTR), gcn_expand_builtin_1)
DEF_BUILTIN (ACC_SINGLE_COPY_END, -1, "single_copy_end", B_INSN,
_A2 (GCN_BTI_VOID, GCN_BTI_LDS_VOIDPTR), gcn_expand_builtin_1)
DEF_BUILTIN (ACC_BARRIER, -1, "acc_barrier", B_INSN, _A1 (GCN_BTI_VOID),
gcn_expand_builtin_1)
#undef _A1
#undef _A2
#undef _A3
#undef _A4
#undef _A5
/* Copyright (C) 2016-2019 Free Software Foundation, Inc.
This file is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
Software Foundation; either version 3 of the License, or (at your option)
any later version.
This file is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
for more details.
You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING3. If not see
<http://www.gnu.org/licenses/>. */
#ifndef OBJECT_FORMAT_ELF
#error elf.h included before elfos.h
#endif
#define TEXT_SECTION_ASM_OP "\t.section\t.text"
#define BSS_SECTION_ASM_OP "\t.section\t.bss"
#define GLOBAL_ASM_OP "\t.globl\t"
#define DATA_SECTION_ASM_OP "\t.data\t"
#define SET_ASM_OP "\t.set\t"
#define LOCAL_LABEL_PREFIX "."
#define USER_LABEL_PREFIX ""
#define ASM_COMMENT_START ";"
#define TARGET_ASM_NAMED_SECTION default_elf_asm_named_section
#define ASM_OUTPUT_ALIGNED_BSS(FILE, DECL, NAME, SIZE, ALIGN) \
asm_output_aligned_bss (FILE, DECL, NAME, SIZE, ALIGN)
#undef ASM_DECLARE_FUNCTION_NAME
#define ASM_DECLARE_FUNCTION_NAME(FILE, NAME, DECL) \
gcn_hsa_declare_function_name ((FILE), (NAME), (DECL))
/* Unlike GNU as, the LLVM assembler uses log2 alignments. */
#undef ASM_OUTPUT_ALIGNED_COMMON
#define ASM_OUTPUT_ALIGNED_COMMON(FILE, NAME, SIZE, ALIGNMENT) \
(fprintf ((FILE), "%s", COMMON_ASM_OP), \
assemble_name ((FILE), (NAME)), \
fprintf ((FILE), "," HOST_WIDE_INT_PRINT_UNSIGNED ",%u\n", \
(SIZE) > 0 ? (SIZE) : 1, exact_log2 ((ALIGNMENT) / BITS_PER_UNIT)))
#define ASM_OUTPUT_LABEL(FILE,NAME) \
do { assemble_name (FILE, NAME); fputs (":\n", FILE); } while (0)
#define ASM_OUTPUT_LABELREF(FILE, NAME) \
asm_fprintf (FILE, "%U%s", default_strip_name_encoding (NAME))
extern unsigned int gcn_local_sym_hash (const char *name);
#define ASM_OUTPUT_SYMBOL_REF(FILE, X) gcn_asm_output_symbol_ref (FILE, X)
#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
fprintf (FILE, "\t.word .L%d-.L%d\n", VALUE, REL)
#define ASM_OUTPUT_ADDR_VEC_ELT(FILE, VALUE) \
fprintf (FILE, "\t.word .L%d\n", VALUE)
#define ASM_OUTPUT_ALIGN(FILE,LOG) \
do { if (LOG!=0) fprintf (FILE, "\t.align\t%d\n", 1<<(LOG)); } while (0)
#define ASM_OUTPUT_ALIGN_WITH_NOP(FILE,LOG) \
do { \
if (LOG!=0) \
fprintf (FILE, "\t.p2alignl\t%d, 0xBF800000" \
" ; Fill value is 's_nop 0'\n", (LOG)); \
} while (0)
#define ASM_APP_ON ""
#define ASM_APP_OFF ""
/* Avoid the default in ../../gcc.c, which adds "-pthread", which is not
supported for gcn. */
#define GOMP_SELF_SPECS ""
/* Use LLVM assembler and linker options. */
#define ASM_SPEC "-triple=amdgcn--amdhsa " \
"%:last_arg(%{march=*:-mcpu=%*}) " \
"-filetype=obj"
/* Add -mlocal-symbol-id=<source-file-basename> unless the user (or mkoffload)
passes the option explicitly on the command line. The option also causes
several dump-matching tests to fail in the testsuite, so the option is not
added when the tree-dump or compare-debug options used in the testsuite are
present.
This has the potential for surprise, but a user can still use an explicit
-mlocal-symbol-id=<whatever> option manually together with -fdump-tree or
-fcompare-debug options. */
#define CC1_SPEC "%{!mlocal-symbol-id=*:%{!fdump-tree-*:" \
"%{!fdump-ipa-*:%{!fcompare-debug*:-mlocal-symbol-id=%b}}}}"
#define LINK_SPEC "--pie"
#define LIB_SPEC "-lc"
/* Provides a _start symbol to keep the linker happy. */
#define STARTFILE_SPEC "crt0.o%s"
#define ENDFILE_SPEC ""
#define STANDARD_STARTFILE_PREFIX_2 ""
/* The LLVM assembler rejects multiple -mcpu options, so we must drop
all but the last. */
extern const char *last_arg_spec_function (int argc, const char **argv);
#define EXTRA_SPEC_FUNCTIONS \
{ "last_arg", last_arg_spec_function },
#undef LOCAL_INCLUDE_DIR
/* FIXME: Review debug info settings.
* In particular, EH_FRAME_THROUGH_COLLECT2 is probably the wrong
* thing but stuff fails to build without it.
* (Debug info is not a big deal until we get a debugger.) */
#define PREFERRED_DEBUGGING_TYPE DWARF2_DEBUG
#define DWARF2_DEBUGGING_INFO 1
#define DWARF2_ASM_LINE_DEBUG_INFO 1
#define EH_FRAME_THROUGH_COLLECT2 1
/* Copyright (C) 2016-2019 Free Software Foundation, Inc.
This file is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
Software Foundation; either version 3 of the License, or (at your option)
any later version.
This file is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
for more details.
You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING3. If not see
<http://www.gnu.org/licenses/>. */
/* Half-precision floating point */
FLOAT_MODE (HF, 2, 0);
/* FIXME: No idea what format it is. */
ADJUST_FLOAT_FORMAT (HF, &ieee_half_format);
/* Native vector modes. */
VECTOR_MODE (INT, QI, 64); /* V64QI */
VECTOR_MODE (INT, HI, 64); /* V64HI */
VECTOR_MODE (INT, SI, 64); /* V64SI */
VECTOR_MODE (INT, DI, 64); /* V64DI */
VECTOR_MODE (INT, TI, 64); /* V64TI */
VECTOR_MODE (FLOAT, HF, 64); /* V64HF */
VECTOR_MODE (FLOAT, SF, 64); /* V64SF */
VECTOR_MODE (FLOAT, DF, 64); /* V64DF */
/* Vector units handle reads independently and thus no large alignment
is needed. */
ADJUST_ALIGNMENT (V64QI, 1);
ADJUST_ALIGNMENT (V64HI, 2);
ADJUST_ALIGNMENT (V64SI, 4);
ADJUST_ALIGNMENT (V64DI, 8);
ADJUST_ALIGNMENT (V64TI, 16);
ADJUST_ALIGNMENT (V64HF, 2);
ADJUST_ALIGNMENT (V64SF, 4);
ADJUST_ALIGNMENT (V64DF, 8);
/* Copyright (C) 2016-2019 Free Software Foundation, Inc.
This file is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
Software Foundation; either version 3 of the License, or (at your option)
any later version.
This file is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
for more details.
You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING3. If not see
<http://www.gnu.org/licenses/>. */
#ifndef GCN_OPTS_H
#define GCN_OPTS_H
/* Which processor to generate code or schedule for. */
enum processor_type
{
PROCESSOR_CARRIZO,
PROCESSOR_FIJI,
PROCESSOR_VEGA
};
/* Set in gcn_option_override. */
extern int gcn_isa;
#define TARGET_GCN3 (gcn_isa == 3)
#define TARGET_GCN3_PLUS (gcn_isa >= 3)
#define TARGET_GCN5 (gcn_isa == 5)
#define TARGET_GCN5_PLUS (gcn_isa >= 5)
#endif
/* Copyright (C) 2017-2019 Free Software Foundation, Inc.
This file is part of GCC.
GCC is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
Software Foundation; either version 3, or (at your option) any later
version.
GCC is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
for more details.
You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING3. If not see
<http://www.gnu.org/licenses/>. */
INSERT_PASS_AFTER (pass_omp_target_link, 1, pass_omp_gcn);
/* Copyright (C) 2016-2019 Free Software Foundation, Inc.
This file is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
Software Foundation; either version 3 of the License, or (at your option)
any later version.
This file is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
for more details.
You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING3. If not see
<http://www.gnu.org/licenses/>. */
#ifndef _GCN_PROTOS_
#define _GCN_PROTOS_
extern void gcn_asm_output_symbol_ref (FILE *file, rtx x);
extern tree gcn_builtin_decl (unsigned code, bool initialize_p);
extern bool gcn_can_split_p (machine_mode, rtx);
extern bool gcn_constant64_p (rtx);
extern bool gcn_constant_p (rtx);
extern rtx gcn_convert_mask_mode (rtx reg);
extern char * gcn_expand_dpp_shr_insn (machine_mode, const char *, int, int);
extern void gcn_expand_epilogue ();
extern rtx gcn_expand_scaled_offsets (addr_space_t as, rtx base, rtx offsets,
rtx scale, bool unsigned_p, rtx exec);
extern void gcn_expand_prologue ();
extern rtx gcn_expand_reduc_scalar (machine_mode, rtx, int);
extern rtx gcn_expand_scalar_to_vector_address (machine_mode, rtx, rtx, rtx);
extern void gcn_expand_vector_init (rtx, rtx);
extern bool gcn_flat_address_p (rtx, machine_mode);
extern bool gcn_fp_constant_p (rtx, bool);
extern rtx gcn_full_exec ();
extern rtx gcn_full_exec_reg ();
extern rtx gcn_gen_undef (machine_mode);
extern bool gcn_global_address_p (rtx);
extern tree gcn_goacc_adjust_propagation_record (tree record_type, bool sender,
const char *name);
extern void gcn_goacc_adjust_gangprivate_decl (tree var);
extern void gcn_goacc_reduction (gcall *call);
extern bool gcn_hard_regno_rename_ok (unsigned int from_reg,
unsigned int to_reg);
extern machine_mode gcn_hard_regno_caller_save_mode (unsigned int regno,
unsigned int nregs,
machine_mode regmode);
extern bool gcn_hard_regno_mode_ok (int regno, machine_mode mode);
extern int gcn_hard_regno_nregs (int regno, machine_mode mode);
extern void gcn_hsa_declare_function_name (FILE *file, const char *name,
tree decl);
extern HOST_WIDE_INT gcn_initial_elimination_offset (int, int);
extern bool gcn_inline_constant64_p (rtx);
extern bool gcn_inline_constant_p (rtx);
extern int gcn_inline_fp_constant_p (rtx, bool);
extern reg_class gcn_mode_code_base_reg_class (machine_mode, addr_space_t,
int, int);
extern rtx gcn_oacc_dim_pos (int dim);
extern rtx gcn_oacc_dim_size (int dim);
extern rtx gcn_operand_doublepart (machine_mode, rtx, int);
extern rtx gcn_operand_part (machine_mode, rtx, int);
extern bool gcn_regno_mode_code_ok_for_base_p (int, machine_mode,
addr_space_t, int, int);
extern reg_class gcn_regno_reg_class (int regno);
extern rtx gcn_scalar_exec ();
extern rtx gcn_scalar_exec_reg ();
extern bool gcn_scalar_flat_address_p (rtx);
extern bool gcn_scalar_flat_mem_p (rtx);
extern bool gcn_sgpr_move_p (rtx, rtx);
extern bool gcn_valid_move_p (machine_mode, rtx, rtx);
extern rtx gcn_vec_constant (machine_mode, int);
extern rtx gcn_vec_constant (machine_mode, rtx);
extern bool gcn_vgpr_move_p (rtx, rtx);
extern void print_operand_address (FILE *file, register rtx addr);
extern void print_operand (FILE *file, rtx x, int code);
extern bool regno_ok_for_index_p (int);
enum gcn_cvt_t
{
fix_trunc_cvt,
fixuns_trunc_cvt,
float_cvt,
floatuns_cvt,
extend_cvt,
trunc_cvt
};
extern bool gcn_valid_cvt_p (machine_mode from, machine_mode to,
enum gcn_cvt_t op);
#ifdef TREE_CODE
extern void gcn_init_cumulative_args (CUMULATIVE_ARGS *, tree, rtx, tree,
int);
class gimple_opt_pass;
extern gimple_opt_pass *make_pass_omp_gcn (gcc::context *ctxt);
#endif
/* Return true if MODE is valid for 1 VGPR register. */
inline bool
vgpr_1reg_mode_p (machine_mode mode)
{
return (mode == SImode || mode == SFmode || mode == HImode || mode == QImode
|| mode == V64QImode || mode == V64HImode || mode == V64SImode
|| mode == V64HFmode || mode == V64SFmode || mode == BImode);
}
/* Return true if MODE is valid for 1 SGPR register. */
inline bool
sgpr_1reg_mode_p (machine_mode mode)
{
return (mode == SImode || mode == SFmode || mode == HImode
|| mode == QImode || mode == BImode);
}
/* Return true if MODE is valid for pair of VGPR registers. */
inline bool
vgpr_2reg_mode_p (machine_mode mode)
{
return (mode == DImode || mode == DFmode
|| mode == V64DImode || mode == V64DFmode);
}
/* Return true if MODE can be handled directly by VGPR operations. */
inline bool
vgpr_vector_mode_p (machine_mode mode)
{
return (mode == V64QImode || mode == V64HImode
|| mode == V64SImode || mode == V64DImode
|| mode == V64HFmode || mode == V64SFmode || mode == V64DFmode);
}
/* Return true if MODE is valid for pair of SGPR registers. */
inline bool
sgpr_2reg_mode_p (machine_mode mode)
{
return mode == DImode || mode == DFmode;
}
#endif
/* Run a stand-alone AMD GCN kernel.
Copyright 2017 Mentor Graphics Corporation
Copyright 2018-2019 Free Software Foundation, Inc.
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>. */
/* This program will run a compiled stand-alone GCN kernel on a GPU.
The kernel entry point's signature must use a standard main signature:
int main(int argc, char **argv)
*/
#include <stdint.h>
#include <stdbool.h>
#include <stdlib.h>
#include <malloc.h>
#include <stdio.h>
#include <string.h>
#include <dlfcn.h>
#include <unistd.h>
#include <elf.h>
#include <signal.h>
/* These probably won't be in elf.h for a while. */
#ifndef R_AMDGPU_NONE
#define R_AMDGPU_NONE 0
#define R_AMDGPU_ABS32_LO 1 /* (S + A) & 0xFFFFFFFF */
#define R_AMDGPU_ABS32_HI 2 /* (S + A) >> 32 */
#define R_AMDGPU_ABS64 3 /* S + A */
#define R_AMDGPU_REL32 4 /* S + A - P */
#define R_AMDGPU_REL64 5 /* S + A - P */
#define R_AMDGPU_ABS32 6 /* S + A */
#define R_AMDGPU_GOTPCREL 7 /* G + GOT + A - P */
#define R_AMDGPU_GOTPCREL32_LO 8 /* (G + GOT + A - P) & 0xFFFFFFFF */
#define R_AMDGPU_GOTPCREL32_HI 9 /* (G + GOT + A - P) >> 32 */
#define R_AMDGPU_REL32_LO 10 /* (S + A - P) & 0xFFFFFFFF */
#define R_AMDGPU_REL32_HI 11 /* (S + A - P) >> 32 */
#define reserved 12
#define R_AMDGPU_RELATIVE64 13 /* B + A */
#endif
#include "hsa.h"
#ifndef HSA_RUNTIME_LIB
#define HSA_RUNTIME_LIB "libhsa-runtime64.so"
#endif
#ifndef VERSION_STRING
#define VERSION_STRING "(version unknown)"
#endif
bool debug = false;
hsa_agent_t device = { 0 };
hsa_queue_t *queue = NULL;
uint64_t kernel = 0;
hsa_executable_t executable = { 0 };
hsa_region_t kernargs_region = { 0 };
uint32_t kernarg_segment_size = 0;
uint32_t group_segment_size = 0;
uint32_t private_segment_size = 0;
static void
usage (const char *progname)
{
printf ("Usage: %s [options] kernel [kernel-args]\n\n"
"Options:\n"
" --help\n"
" --version\n"
" --debug\n", progname);
}
static void
version (const char *progname)
{
printf ("%s " VERSION_STRING "\n", progname);
}
/* Since the HSA runtime is dlopened, the following structure defines the
necessary function pointers.
Code adapted from libgomp. */
struct hsa_runtime_fn_info
{
/* HSA runtime. */
hsa_status_t (*hsa_status_string_fn) (hsa_status_t status,
const char **status_string);
hsa_status_t (*hsa_agent_get_info_fn) (hsa_agent_t agent,
hsa_agent_info_t attribute,
void *value);
hsa_status_t (*hsa_init_fn) (void);
hsa_status_t (*hsa_iterate_agents_fn)
(hsa_status_t (*callback) (hsa_agent_t agent, void *data), void *data);
hsa_status_t (*hsa_region_get_info_fn) (hsa_region_t region,
hsa_region_info_t attribute,
void *value);
hsa_status_t (*hsa_queue_create_fn)
(hsa_agent_t agent, uint32_t size, hsa_queue_type_t type,
void (*callback) (hsa_status_t status, hsa_queue_t *source, void *data),
void *data, uint32_t private_segment_size,
uint32_t group_segment_size, hsa_queue_t **queue);
hsa_status_t (*hsa_agent_iterate_regions_fn)
(hsa_agent_t agent,
hsa_status_t (*callback) (hsa_region_t region, void *data), void *data);
hsa_status_t (*hsa_executable_destroy_fn) (hsa_executable_t executable);
hsa_status_t (*hsa_executable_create_fn)
(hsa_profile_t profile, hsa_executable_state_t executable_state,
const char *options, hsa_executable_t *executable);
hsa_status_t (*hsa_executable_global_variable_define_fn)
(hsa_executable_t executable, const char *variable_name, void *address);
hsa_status_t (*hsa_executable_load_code_object_fn)
(hsa_executable_t executable, hsa_agent_t agent,
hsa_code_object_t code_object, const char *options);
hsa_status_t (*hsa_executable_freeze_fn) (hsa_executable_t executable,
const char *options);
hsa_status_t (*hsa_signal_create_fn) (hsa_signal_value_t initial_value,
uint32_t num_consumers,
const hsa_agent_t *consumers,
hsa_signal_t *signal);
hsa_status_t (*hsa_memory_allocate_fn) (hsa_region_t region, size_t size,
void **ptr);
hsa_status_t (*hsa_memory_copy_fn) (void *dst, const void *src,
size_t size);
hsa_status_t (*hsa_memory_free_fn) (void *ptr);
hsa_status_t (*hsa_signal_destroy_fn) (hsa_signal_t signal);
hsa_status_t (*hsa_executable_get_symbol_fn)
(hsa_executable_t executable, const char *module_name,
const char *symbol_name, hsa_agent_t agent, int32_t call_convention,
hsa_executable_symbol_t *symbol);
hsa_status_t (*hsa_executable_symbol_get_info_fn)
(hsa_executable_symbol_t executable_symbol,
hsa_executable_symbol_info_t attribute, void *value);
void (*hsa_signal_store_relaxed_fn) (hsa_signal_t signal,
hsa_signal_value_t value);
hsa_signal_value_t (*hsa_signal_wait_acquire_fn)
(hsa_signal_t signal, hsa_signal_condition_t condition,
hsa_signal_value_t compare_value, uint64_t timeout_hint,
hsa_wait_state_t wait_state_hint);
hsa_signal_value_t (*hsa_signal_wait_relaxed_fn)
(hsa_signal_t signal, hsa_signal_condition_t condition,
hsa_signal_value_t compare_value, uint64_t timeout_hint,
hsa_wait_state_t wait_state_hint);
hsa_status_t (*hsa_queue_destroy_fn) (hsa_queue_t *queue);
hsa_status_t (*hsa_code_object_deserialize_fn)
(void *serialized_code_object, size_t serialized_code_object_size,
const char *options, hsa_code_object_t *code_object);
uint64_t (*hsa_queue_load_write_index_relaxed_fn)
(const hsa_queue_t *queue);
void (*hsa_queue_store_write_index_relaxed_fn)
(const hsa_queue_t *queue, uint64_t value);
hsa_status_t (*hsa_shut_down_fn) ();
};
/* HSA runtime functions that are initialized in init_hsa_context.
Code adapted from libgomp. */
static struct hsa_runtime_fn_info hsa_fns;
#define DLSYM_FN(function) \
*(void**)(&hsa_fns.function##_fn) = dlsym (handle, #function); \
if (hsa_fns.function##_fn == NULL) \
goto fail;
static void
init_hsa_runtime_functions (void)
{
void *handle = dlopen (HSA_RUNTIME_LIB, RTLD_LAZY);
if (handle == NULL)
{
fprintf (stderr,
"The HSA runtime is required to run GCN kernels on hardware.\n"
"%s: File not found or could not be opened\n",
HSA_RUNTIME_LIB);
exit (1);
}
DLSYM_FN (hsa_status_string)
DLSYM_FN (hsa_agent_get_info)
DLSYM_FN (hsa_init)
DLSYM_FN (hsa_iterate_agents)
DLSYM_FN (hsa_region_get_info)
DLSYM_FN (hsa_queue_create)
DLSYM_FN (hsa_agent_iterate_regions)
DLSYM_FN (hsa_executable_destroy)
DLSYM_FN (hsa_executable_create)
DLSYM_FN (hsa_executable_global_variable_define)
DLSYM_FN (hsa_executable_load_code_object)
DLSYM_FN (hsa_executable_freeze)
DLSYM_FN (hsa_signal_create)
DLSYM_FN (hsa_memory_allocate)
DLSYM_FN (hsa_memory_copy)
DLSYM_FN (hsa_memory_free)
DLSYM_FN (hsa_signal_destroy)
DLSYM_FN (hsa_executable_get_symbol)
DLSYM_FN (hsa_executable_symbol_get_info)
DLSYM_FN (hsa_signal_wait_acquire)
DLSYM_FN (hsa_signal_wait_relaxed)
DLSYM_FN (hsa_signal_store_relaxed)
DLSYM_FN (hsa_queue_destroy)
DLSYM_FN (hsa_code_object_deserialize)
DLSYM_FN (hsa_queue_load_write_index_relaxed)
DLSYM_FN (hsa_queue_store_write_index_relaxed)
DLSYM_FN (hsa_shut_down)
return;
fail:
fprintf (stderr, "Failed to find HSA functions in " HSA_RUNTIME_LIB "\n");
exit (1);
}
#undef DLSYM_FN
/* Report a fatal error STR together with the HSA error corresponding to
STATUS and terminate execution of the current process. */
static void
hsa_fatal (const char *str, hsa_status_t status)
{
const char *hsa_error_msg;
hsa_fns.hsa_status_string_fn (status, &hsa_error_msg);
fprintf (stderr, "%s: FAILED\nHSA Runtime message: %s\n", str,
hsa_error_msg);
exit (1);
}
/* Helper macros to ensure we check the return values from the HSA Runtime.
These just keep the rest of the code a bit cleaner. */
#define XHSA_CMP(FN, CMP, MSG) \
do { \
hsa_status_t status = (FN); \
if (!(CMP)) \
hsa_fatal ((MSG), status); \
else if (debug) \
fprintf (stderr, "%s: OK\n", (MSG)); \
} while (0)
#define XHSA(FN, MSG) XHSA_CMP(FN, status == HSA_STATUS_SUCCESS, MSG)
/* Callback of hsa_iterate_agents.
Called once for each available device, and returns "break" when a
suitable one has been found. */
static hsa_status_t
get_gpu_agent (hsa_agent_t agent, void *data __attribute__ ((unused)))
{
hsa_device_type_t device_type;
XHSA (hsa_fns.hsa_agent_get_info_fn (agent, HSA_AGENT_INFO_DEVICE,
&device_type),
"Get agent type");
/* Select only GPU devices. */
/* TODO: support selecting from multiple GPUs. */
if (HSA_DEVICE_TYPE_GPU == device_type)
{
device = agent;
return HSA_STATUS_INFO_BREAK;
}
/* The device was not suitable. */
return HSA_STATUS_SUCCESS;
}
/* Callback of hsa_iterate_regions.
Called once for each available memory region, and returns "break" when a
suitable one has been found. */
static hsa_status_t
get_kernarg_region (hsa_region_t region, void *data __attribute__ ((unused)))
{
/* Reject non-global regions. */
hsa_region_segment_t segment;
hsa_fns.hsa_region_get_info_fn (region, HSA_REGION_INFO_SEGMENT, &segment);
if (HSA_REGION_SEGMENT_GLOBAL != segment)
return HSA_STATUS_SUCCESS;
/* Find a region with the KERNARG flag set. */
hsa_region_global_flag_t flags;
hsa_fns.hsa_region_get_info_fn (region, HSA_REGION_INFO_GLOBAL_FLAGS,
&flags);
if (flags & HSA_REGION_GLOBAL_FLAG_KERNARG)
{
kernargs_region = region;
return HSA_STATUS_INFO_BREAK;
}
/* The region was not suitable. */
return HSA_STATUS_SUCCESS;
}
/* Initialize the HSA Runtime library and GPU device. */
static void
init_device ()
{
/* Load the shared library and find the API functions. */
init_hsa_runtime_functions ();
/* Initialize the HSA Runtime. */
XHSA (hsa_fns.hsa_init_fn (),
"Initialize run-time");
/* Select a suitable device.
The call-back function, get_gpu_agent, does the selection. */
XHSA_CMP (hsa_fns.hsa_iterate_agents_fn (get_gpu_agent, NULL),
status == HSA_STATUS_SUCCESS || status == HSA_STATUS_INFO_BREAK,
"Find a device");
/* Initialize the queue used for launching kernels. */
uint32_t queue_size = 0;
XHSA (hsa_fns.hsa_agent_get_info_fn (device, HSA_AGENT_INFO_QUEUE_MAX_SIZE,
&queue_size),
"Find max queue size");
XHSA (hsa_fns.hsa_queue_create_fn (device, queue_size,
HSA_QUEUE_TYPE_SINGLE, NULL,
NULL, UINT32_MAX, UINT32_MAX, &queue),
"Set up a device queue");
/* Select a memory region for the kernel arguments.
The call-back function, get_kernarg_region, does the selection. */
XHSA_CMP (hsa_fns.hsa_agent_iterate_regions_fn (device, get_kernarg_region,
NULL),
status == HSA_STATUS_SUCCESS || status == HSA_STATUS_INFO_BREAK,
"Locate kernargs memory");
}
/* Read a whole input file.
Code copied from mkoffload. */
static char *
read_file (const char *filename, size_t *plen)
{
size_t alloc = 16384;
size_t base = 0;
char *buffer;
FILE *stream = fopen (filename, "rb");
if (!stream)
{
perror (filename);
exit (1);
}
if (!fseek (stream, 0, SEEK_END))
{
/* Get the file size. */
long s = ftell (stream);
if (s >= 0)
alloc = s + 100;
fseek (stream, 0, SEEK_SET);
}
buffer = malloc (alloc);
for (;;)
{
size_t n = fread (buffer + base, 1, alloc - base - 1, stream);
if (!n)
break;
base += n;
if (base + 1 == alloc)
{
alloc *= 2;
buffer = realloc (buffer, alloc);
}
}
buffer[base] = 0;
*plen = base;
fclose (stream);
return buffer;
}
/* Read a HSA Code Object (HSACO) from file, and load it into the device. */
static void
load_image (const char *filename)
{
size_t image_size;
Elf64_Ehdr *image = (void *) read_file (filename, &image_size);
/* An "executable" consists of one or more code objects. */
XHSA (hsa_fns.hsa_executable_create_fn (HSA_PROFILE_FULL,
HSA_EXECUTABLE_STATE_UNFROZEN, "",
&executable),
"Initialize GCN executable");
/* Hide relocations from the HSA runtime loader.
Keep a copy of the unmodified section headers to use later. */
Elf64_Shdr *image_sections =
(Elf64_Shdr *) ((char *) image + image->e_shoff);
Elf64_Shdr *sections = malloc (sizeof (Elf64_Shdr) * image->e_shnum);
memcpy (sections, image_sections, sizeof (Elf64_Shdr) * image->e_shnum);
for (int i = image->e_shnum - 1; i >= 0; i--)
{
if (image_sections[i].sh_type == SHT_RELA
|| image_sections[i].sh_type == SHT_REL)
/* Change section type to something harmless. */
image_sections[i].sh_type = SHT_NOTE;
}
/* Add the HSACO to the executable. */
hsa_code_object_t co = { 0 };
XHSA (hsa_fns.hsa_code_object_deserialize_fn (image, image_size, NULL, &co),
"Deserialize GCN code object");
XHSA (hsa_fns.hsa_executable_load_code_object_fn (executable, device, co,
""),
"Load GCN code object");
/* We're done modifying the executable. */
XHSA (hsa_fns.hsa_executable_freeze_fn (executable, ""),
"Freeze GCN executable");
/* Locate the "main" function, and read the kernel's properties. */
hsa_executable_symbol_t symbol;
XHSA (hsa_fns.hsa_executable_get_symbol_fn (executable, NULL, "main",
device, 0, &symbol),
"Find 'main' function");
XHSA (hsa_fns.hsa_executable_symbol_get_info_fn
(symbol, HSA_EXECUTABLE_SYMBOL_INFO_KERNEL_OBJECT, &kernel),
"Extract kernel object");
XHSA (hsa_fns.hsa_executable_symbol_get_info_fn
(symbol, HSA_EXECUTABLE_SYMBOL_INFO_KERNEL_KERNARG_SEGMENT_SIZE,
&kernarg_segment_size),
"Extract kernarg segment size");
XHSA (hsa_fns.hsa_executable_symbol_get_info_fn
(symbol, HSA_EXECUTABLE_SYMBOL_INFO_KERNEL_GROUP_SEGMENT_SIZE,
&group_segment_size),
"Extract group segment size");
XHSA (hsa_fns.hsa_executable_symbol_get_info_fn
(symbol, HSA_EXECUTABLE_SYMBOL_INFO_KERNEL_PRIVATE_SEGMENT_SIZE,
&private_segment_size),
"Extract private segment size");
/* Find main function in ELF, and calculate actual load offset. */
Elf64_Addr load_offset;
XHSA (hsa_fns.hsa_executable_symbol_get_info_fn
(symbol, HSA_EXECUTABLE_SYMBOL_INFO_VARIABLE_ADDRESS,
&load_offset),
"Extract 'main' symbol address");
for (int i = 0; i < image->e_shnum; i++)
if (sections[i].sh_type == SHT_SYMTAB)
{
Elf64_Shdr *strtab = &sections[sections[i].sh_link];
char *strings = (char *) image + strtab->sh_offset;
for (size_t offset = 0;
offset < sections[i].sh_size;
offset += sections[i].sh_entsize)
{
Elf64_Sym *sym = (Elf64_Sym *) ((char *) image
+ sections[i].sh_offset + offset);
if (strcmp ("main", strings + sym->st_name) == 0)
{
load_offset -= sym->st_value;
goto found_main;
}
}
}
/* We only get here when main was not found.
This should never happen. */
fprintf (stderr, "Error: main function not found.\n");
abort ();
found_main:;
/* Find dynamic symbol table. */
Elf64_Shdr *dynsym = NULL;
for (int i = 0; i < image->e_shnum; i++)
if (sections[i].sh_type == SHT_DYNSYM)
{
dynsym = &sections[i];
break;
}
/* Fix up relocations. */
for (int i = 0; i < image->e_shnum; i++)
{
if (sections[i].sh_type == SHT_RELA)
for (size_t offset = 0;
offset < sections[i].sh_size;
offset += sections[i].sh_entsize)
{
Elf64_Rela *reloc = (Elf64_Rela *) ((char *) image
+ sections[i].sh_offset
+ offset);
Elf64_Sym *sym =
(dynsym
? (Elf64_Sym *) ((char *) image
+ dynsym->sh_offset
+ (dynsym->sh_entsize
* ELF64_R_SYM (reloc->r_info))) : NULL);
int64_t S = (sym ? sym->st_value : 0);
int64_t P = reloc->r_offset + load_offset;
int64_t A = reloc->r_addend;
int64_t B = load_offset;
int64_t V, size;
switch (ELF64_R_TYPE (reloc->r_info))
{
case R_AMDGPU_ABS32_LO:
V = (S + A) & 0xFFFFFFFF;
size = 4;
break;
case R_AMDGPU_ABS32_HI:
V = (S + A) >> 32;
size = 4;
break;
case R_AMDGPU_ABS64:
V = S + A;
size = 8;
break;
case R_AMDGPU_REL32:
V = S + A - P;
size = 4;
break;
case R_AMDGPU_REL64:
/* FIXME
LLD seems to emit REL64 where the assembler has ABS64.
This is clearly wrong because it's not what the compiler
is expecting. Let's assume, for now, that it's a bug.
In any case, GCN kernels are always self-contained and
therefore relative relocations will have been resolved
already, so this should be a safe workaround. */
V = S + A /* - P */ ;
size = 8;
break;
case R_AMDGPU_ABS32:
V = S + A;
size = 4;
break;
/* TODO R_AMDGPU_GOTPCREL */
/* TODO R_AMDGPU_GOTPCREL32_LO */
/* TODO R_AMDGPU_GOTPCREL32_HI */
case R_AMDGPU_REL32_LO:
V = (S + A - P) & 0xFFFFFFFF;
size = 4;
break;
case R_AMDGPU_REL32_HI:
V = (S + A - P) >> 32;
size = 4;
break;
case R_AMDGPU_RELATIVE64:
V = B + A;
size = 8;
break;
default:
fprintf (stderr, "Error: unsupported relocation type.\n");
exit (1);
}
XHSA (hsa_fns.hsa_memory_copy_fn ((void *) P, &V, size),
"Fix up relocation");
}
}
}
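The relocation arithmetic above uses the usual ELF S/A/P/B convention (symbol value, addend, place, base). The value computations for a few of the handled kinds can be factored into pure helpers and checked on the host; these helper names are illustrative only, not part of gcn-run.c:

```c
#include <assert.h>
#include <stdint.h>

/* Value for R_AMDGPU_ABS32_LO: low 32 bits of the symbol address plus
   addend.  */
static int64_t
reloc_abs32_lo (int64_t S, int64_t A)
{
  return (S + A) & 0xFFFFFFFF;
}

/* Value for R_AMDGPU_ABS32_HI: high 32 bits of the same sum.  */
static int64_t
reloc_abs32_hi (int64_t S, int64_t A)
{
  return (S + A) >> 32;
}

/* Value for R_AMDGPU_REL32: PC-relative displacement from the place
   being patched.  */
static int64_t
reloc_rel32 (int64_t S, int64_t A, int64_t P)
{
  return S + A - P;
}
```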
/* Allocate some device memory from the kernargs region.
The returned address will be 32-bit (with excess zeroed on 64-bit host),
and accessible via the same address on both host and target (via
__flat_scalar GCN address space). */
static void *
device_malloc (size_t size)
{
void *result;
XHSA (hsa_fns.hsa_memory_allocate_fn (kernargs_region, size, &result),
"Allocate device memory");
return result;
}
/* These are the device pointers that will be transferred to the target.
The HSA Runtime points the kernargs register here.
They correspond to the function signature:
int main (int argc, char *argv[], int *return_value)
The compiler expects this signature for kernel functions, and will
automatically assign the exit value to *return_value. */
struct kernargs
{
/* Kernargs. */
int32_t argc;
int64_t argv;
int64_t out_ptr;
int64_t heap_ptr;
/* Output data. */
struct output
{
int return_value;
int next_output;
struct printf_data
{
int written;
char msg[128];
int type;
union
{
int64_t ivalue;
double dvalue;
char text[128];
};
} queue[1000];
} output_data;
struct heap
{
int64_t size;
char data[0];
} heap;
};
/* Print any console output from the kernel.
We print all entries from print_index to the next entry without a "written"
flag. Subsequent calls should use the returned print_index value to resume
from the same point. */
void
gomp_print_output (struct kernargs *kernargs, int *print_index)
{
int limit = (sizeof (kernargs->output_data.queue)
/ sizeof (kernargs->output_data.queue[0]));
int i;
for (i = *print_index; i < limit; i++)
{
struct printf_data *data = &kernargs->output_data.queue[i];
if (!data->written)
break;
switch (data->type)
{
case 0:
printf ("%.128s%ld\n", data->msg, data->ivalue);
break;
case 1:
printf ("%.128s%f\n", data->msg, data->dvalue);
break;
case 2:
printf ("%.128s%.128s\n", data->msg, data->text);
break;
case 3:
printf ("%.128s%.128s", data->msg, data->text);
break;
}
data->written = 0;
}
if (*print_index < limit && i == limit
&& kernargs->output_data.next_output > limit)
printf ("WARNING: GCN print buffer exhausted.\n");
*print_index = i;
}
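The consumer loop above implements a simple one-way protocol: entries are consumed in order until one without the "written" flag is found, each consumed entry has its flag cleared so the slot reads as fresh on a later pass, and the resume index is handed back to the caller. A minimal host-only model of that protocol, with illustrative names (not the real kernargs layout):

```c
#include <assert.h>

/* Cut-down stand-in for one slot of the print queue.  */
struct entry
{
  int written;
  char msg[16];
};

/* Consume written entries starting at START; clear each "written" flag
   after consuming, and return the index to resume from next time.  */
static int
consume (struct entry *queue, int limit, int start)
{
  int i;
  for (i = start; i < limit; i++)
    {
      if (!queue[i].written)
	break;
      /* A real consumer would print queue[i].msg here.  */
      queue[i].written = 0;
    }
  return i;
}
```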
/* Execute an already-loaded kernel on the device. */
static void
run (void *kernargs)
{
/* A "signal" is used to launch and monitor the kernel. */
hsa_signal_t signal;
XHSA (hsa_fns.hsa_signal_create_fn (1, 0, NULL, &signal),
"Create signal");
/* Configure for a single-worker kernel. */
uint64_t index = hsa_fns.hsa_queue_load_write_index_relaxed_fn (queue);
const uint32_t queueMask = queue->size - 1;
hsa_kernel_dispatch_packet_t *dispatch_packet =
&(((hsa_kernel_dispatch_packet_t *) (queue->base_address))[index &
queueMask]);
dispatch_packet->setup |= 3 << HSA_KERNEL_DISPATCH_PACKET_SETUP_DIMENSIONS;
dispatch_packet->workgroup_size_x = (uint16_t) 1;
dispatch_packet->workgroup_size_y = (uint16_t) 64;
dispatch_packet->workgroup_size_z = (uint16_t) 1;
dispatch_packet->grid_size_x = 1;
dispatch_packet->grid_size_y = 64;
dispatch_packet->grid_size_z = 1;
dispatch_packet->completion_signal = signal;
dispatch_packet->kernel_object = kernel;
dispatch_packet->kernarg_address = (void *) kernargs;
dispatch_packet->private_segment_size = private_segment_size;
dispatch_packet->group_segment_size = group_segment_size;
uint16_t header = 0;
header |= HSA_FENCE_SCOPE_SYSTEM << HSA_PACKET_HEADER_ACQUIRE_FENCE_SCOPE;
header |= HSA_FENCE_SCOPE_SYSTEM << HSA_PACKET_HEADER_RELEASE_FENCE_SCOPE;
header |= HSA_PACKET_TYPE_KERNEL_DISPATCH << HSA_PACKET_HEADER_TYPE;
__atomic_store_n ((uint32_t *) dispatch_packet,
header | (dispatch_packet->setup << 16),
__ATOMIC_RELEASE);
if (debug)
fprintf (stderr, "Launch kernel\n");
hsa_fns.hsa_queue_store_write_index_relaxed_fn (queue, index + 1);
hsa_fns.hsa_signal_store_relaxed_fn (queue->doorbell_signal, index);
/* Kernel running ...... */
int print_index = 0;
while (hsa_fns.hsa_signal_wait_relaxed_fn (signal, HSA_SIGNAL_CONDITION_LT,
1, 1000000,
HSA_WAIT_STATE_ACTIVE) != 0)
{
usleep (10000);
gomp_print_output (kernargs, &print_index);
}
gomp_print_output (kernargs, &print_index);
if (debug)
fprintf (stderr, "Kernel exited\n");
XHSA (hsa_fns.hsa_signal_destroy_fn (signal),
"Clean up signal");
}
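The dispatch-packet header above is assembled bitwise and then stored atomically together with the setup word, which is what makes the packet visible to the packet processor. The shift and value constants below are restated locally so the sketch is self-contained; they are intended to mirror the HSA runtime's hsa.h, but treat them as assumptions rather than authoritative:

```c
#include <assert.h>
#include <stdint.h>

/* Locally restated stand-ins for the hsa.h enums used above.  */
#define PACKET_HEADER_TYPE 0
#define PACKET_HEADER_ACQUIRE_FENCE_SCOPE 9
#define PACKET_HEADER_RELEASE_FENCE_SCOPE 11
#define PACKET_TYPE_KERNEL_DISPATCH 2
#define FENCE_SCOPE_SYSTEM 2

/* Pack a kernel-dispatch header with system-scope acquire and release
   fences, as the run () function does.  */
static uint16_t
make_header (void)
{
  uint16_t header = 0;
  header |= FENCE_SCOPE_SYSTEM << PACKET_HEADER_ACQUIRE_FENCE_SCOPE;
  header |= FENCE_SCOPE_SYSTEM << PACKET_HEADER_RELEASE_FENCE_SCOPE;
  header |= PACKET_TYPE_KERNEL_DISPATCH << PACKET_HEADER_TYPE;
  return header;
}
```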
int
main (int argc, char *argv[])
{
int kernel_arg = 0;
for (int i = 1; i < argc; i++)
{
if (!strcmp (argv[i], "--help"))
{
usage (argv[0]);
return 0;
}
else if (!strcmp (argv[i], "--version"))
{
version (argv[0]);
return 0;
}
else if (!strcmp (argv[i], "--debug"))
debug = true;
else if (argv[i][0] == '-')
{
usage (argv[0]);
return 1;
}
else
{
kernel_arg = i;
break;
}
}
if (!kernel_arg)
{
/* No kernel arguments were found. */
usage (argv[0]);
return 1;
}
/* The remaining arguments are for the GCN kernel. */
int kernel_argc = argc - kernel_arg;
char **kernel_argv = &argv[kernel_arg];
init_device ();
load_image (kernel_argv[0]);
/* Calculate size of function parameters + argv data. */
size_t args_size = 0;
for (int i = 0; i < kernel_argc; i++)
args_size += strlen (kernel_argv[i]) + 1;
/* Allocate device memory for both function parameters and the argv
data. */
size_t heap_size = 10 * 1024 * 1024; /* 10MB. */
struct kernargs *kernargs = device_malloc (sizeof (*kernargs) + heap_size);
struct argdata
{
int64_t argv_data[kernel_argc];
char strings[args_size];
} *args = device_malloc (sizeof (struct argdata));
/* Write the data to the target. */
kernargs->argc = kernel_argc;
kernargs->argv = (int64_t) args->argv_data;
kernargs->out_ptr = (int64_t) &kernargs->output_data;
kernargs->output_data.return_value = 0xcafe0000; /* Default return value. */
kernargs->output_data.next_output = 0;
for (unsigned i = 0; i < (sizeof (kernargs->output_data.queue)
/ sizeof (kernargs->output_data.queue[0])); i++)
kernargs->output_data.queue[i].written = 0;
int offset = 0;
for (int i = 0; i < kernel_argc; i++)
{
size_t arg_len = strlen (kernel_argv[i]) + 1;
args->argv_data[i] = (int64_t) &args->strings[offset];
memcpy (&args->strings[offset], kernel_argv[i], arg_len);
offset += arg_len;
}
kernargs->heap_ptr = (int64_t) &kernargs->heap;
kernargs->heap.size = heap_size;
/* Run the kernel on the GPU. */
run (kernargs);
unsigned int return_value =
(unsigned int) kernargs->output_data.return_value;
unsigned int upper = (return_value & ~0xffff) >> 16;
if (upper == 0xcafe)
printf ("Kernel exit value was never set\n");
else if (upper == 0xffff)
; /* Set by exit. */
else if (upper == 0)
; /* Set by return from main. */
else
printf ("Possible kernel exit value corruption, 2 most significant bytes "
"aren't 0xffff, 0xcafe, or 0: 0x%x\n", return_value);
if (upper == 0xffff)
{
unsigned int signal = (return_value >> 8) & 0xff;
if (signal == SIGABRT)
printf ("Kernel aborted\n");
else if (signal != 0)
printf ("Kernel received unknown signal\n");
}
if (debug)
printf ("Kernel exit value: %d\n", return_value & 0xff);
/* Clean shut down. */
XHSA (hsa_fns.hsa_memory_free_fn (kernargs),
"Clean up device memory");
XHSA (hsa_fns.hsa_executable_destroy_fn (executable),
"Clean up GCN executable");
XHSA (hsa_fns.hsa_queue_destroy_fn (queue),
"Clean up device queue");
XHSA (hsa_fns.hsa_shut_down_fn (),
"Shut down run-time");
return return_value & 0xff;
}
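The exit-word checks in main above encode three states in the top 16 bits: 0xcafe means the kernel never set a value, 0xffff means exit () was called (with a signal number in the next byte), and 0 means a normal return from main; the low byte carries the exit status. That decoding can be captured in a small pure function (names here are illustrative, not part of gcn-run.c):

```c
#include <assert.h>

/* Classification of the 32-bit kernel exit word.  */
enum exit_kind { EXIT_UNSET, EXIT_BY_EXIT, EXIT_BY_RETURN, EXIT_CORRUPT };

/* Decode the top 16 bits of the exit word, as main () does.  */
static enum exit_kind
classify (unsigned int rv)
{
  unsigned int upper = (rv & ~0xffff) >> 16;
  if (upper == 0xcafe)
    return EXIT_UNSET;		/* Default value was never overwritten.  */
  if (upper == 0xffff)
    return EXIT_BY_EXIT;	/* Set by exit (); signal in bits 8-15.  */
  if (upper == 0)
    return EXIT_BY_RETURN;	/* Set by return from main.  */
  return EXIT_CORRUPT;		/* Anything else is suspect.  */
}
```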
/* Copyright (C) 2017-2019 Free Software Foundation, Inc.
This file is part of GCC.
GCC is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
Software Foundation; either version 3, or (at your option) any later
version.
GCC is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
for more details.
You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING3. If not see
<http://www.gnu.org/licenses/>. */
/* {{{ Includes. */
#include "config.h"
#include "system.h"
#include "coretypes.h"
#include "backend.h"
#include "target.h"
#include "tree.h"
#include "gimple.h"
#include "tree-pass.h"
#include "gimple-iterator.h"
#include "cfghooks.h"
#include "cfgloop.h"
#include "tm_p.h"
#include "stringpool.h"
#include "fold-const.h"
#include "varasm.h"
#include "omp-low.h"
#include "omp-general.h"
#include "internal-fn.h"
#include "tree-vrp.h"
#include "tree-ssanames.h"
#include "tree-ssa-operands.h"
#include "gimplify.h"
#include "tree-phinodes.h"
#include "cgraph.h"
#include "targhooks.h"
#include "langhooks-def.h"
/* }}} */
/* {{{ OMP GCN pass.
This pass is intended to make any GCN-specific transformations to OpenMP
target regions.
At present, its only purpose is to convert some "omp" built-in functions
to use closer-to-the-metal "gcn" built-in functions. */
unsigned int
execute_omp_gcn (void)
{
tree thr_num_tree = builtin_decl_explicit (BUILT_IN_OMP_GET_THREAD_NUM);
tree thr_num_id = DECL_NAME (thr_num_tree);
tree team_num_tree = builtin_decl_explicit (BUILT_IN_OMP_GET_TEAM_NUM);
tree team_num_id = DECL_NAME (team_num_tree);
basic_block bb;
gimple_stmt_iterator gsi;
unsigned int todo = 0;
FOR_EACH_BB_FN (bb, cfun)
for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
{
gimple *call = gsi_stmt (gsi);
tree decl;
if (is_gimple_call (call) && (decl = gimple_call_fndecl (call)))
{
tree decl_id = DECL_NAME (decl);
tree lhs = gimple_get_lhs (call);
if (decl_id == thr_num_id)
{
if (dump_file && (dump_flags & TDF_DETAILS))
fprintf (dump_file,
"Replace '%s' with __builtin_gcn_dim_pos.\n",
IDENTIFIER_POINTER (decl_id));
/* Transform this:
lhs = __builtin_omp_get_thread_num ()
to this:
lhs = __builtin_gcn_dim_pos (1) */
tree fn = targetm.builtin_decl (GCN_BUILTIN_OMP_DIM_POS, 0);
tree fnarg = build_int_cst (unsigned_type_node, 1);
gimple *stmt = gimple_build_call (fn, 1, fnarg);
gimple_call_set_lhs (stmt, lhs);
gsi_replace (&gsi, stmt, true);
todo |= TODO_update_ssa;
}
else if (decl_id == team_num_id)
{
if (dump_file && (dump_flags & TDF_DETAILS))
fprintf (dump_file,
"Replace '%s' with __builtin_gcn_dim_pos.\n",
IDENTIFIER_POINTER (decl_id));
/* Transform this:
lhs = __builtin_omp_get_team_num ()
to this:
lhs = __builtin_gcn_dim_pos (0) */
tree fn = targetm.builtin_decl (GCN_BUILTIN_OMP_DIM_POS, 0);
tree fnarg = build_zero_cst (unsigned_type_node);
gimple *stmt = gimple_build_call (fn, 1, fnarg);
gimple_call_set_lhs (stmt, lhs);
gsi_replace (&gsi, stmt, true);
todo |= TODO_update_ssa;
}
}
}
return todo;
}
namespace
{
const pass_data pass_data_omp_gcn = {
GIMPLE_PASS,
"omp_gcn", /* name */
OPTGROUP_NONE, /* optinfo_flags */
TV_NONE, /* tv_id */
0, /* properties_required */
0, /* properties_provided */
0, /* properties_destroyed */
0, /* todo_flags_start */
TODO_df_finish, /* todo_flags_finish */
};
class pass_omp_gcn : public gimple_opt_pass
{
public:
pass_omp_gcn (gcc::context *ctxt)
: gimple_opt_pass (pass_data_omp_gcn, ctxt)
{
}
/* opt_pass methods: */
virtual bool gate (function *)
{
return flag_openmp;
}
virtual unsigned int execute (function *)
{
return execute_omp_gcn ();
}
}; /* class pass_omp_gcn. */
} /* anon namespace. */
gimple_opt_pass *
make_pass_omp_gcn (gcc::context *ctxt)
{
return new pass_omp_gcn (ctxt);
}
/* }}} */
/* {{{ OpenACC reductions. */
/* Global lock variable, needed for 128bit worker & gang reductions. */
static GTY(()) tree global_lock_var;
/* Lazily generate the global_lock_var decl and return its address. */
static tree
gcn_global_lock_addr ()
{
tree v = global_lock_var;
if (!v)
{
tree name = get_identifier ("__reduction_lock");
tree type = build_qualified_type (unsigned_type_node,
TYPE_QUAL_VOLATILE);
v = build_decl (BUILTINS_LOCATION, VAR_DECL, name, type);
global_lock_var = v;
DECL_ARTIFICIAL (v) = 1;
DECL_EXTERNAL (v) = 1;
TREE_STATIC (v) = 1;
TREE_PUBLIC (v) = 1;
TREE_USED (v) = 1;
mark_addressable (v);
mark_decl_referenced (v);
}
return build_fold_addr_expr (v);
}
/* Helper function for gcn_reduction_update.
Insert code to locklessly update *PTR with *PTR OP VAR just before
GSI. We use a lockless scheme for nearly all cases, which looks
like:
actual = initval (OP);
do {
guess = actual;
write = guess OP myval;
actual = cmp&swap (ptr, guess, write)
} while (actual bit-different-to guess);
return write;
This relies on a cmp&swap instruction, which is available for 32- and
64-bit types. Larger types must use a locking scheme. */
static tree
gcn_lockless_update (location_t loc, gimple_stmt_iterator *gsi,
tree ptr, tree var, tree_code op)
{
unsigned fn = GCN_BUILTIN_CMP_SWAP;
tree_code code = NOP_EXPR;
tree arg_type = unsigned_type_node;
tree var_type = TREE_TYPE (var);
if (TREE_CODE (var_type) == COMPLEX_TYPE
|| TREE_CODE (var_type) == REAL_TYPE)
code = VIEW_CONVERT_EXPR;
if (TYPE_SIZE (var_type) == TYPE_SIZE (long_long_unsigned_type_node))
{
arg_type = long_long_unsigned_type_node;
fn = GCN_BUILTIN_CMP_SWAPLL;
}
tree swap_fn = gcn_builtin_decl (fn, true);
gimple_seq init_seq = NULL;
tree init_var = make_ssa_name (arg_type);
tree init_expr = omp_reduction_init_op (loc, op, var_type);
init_expr = fold_build1 (code, arg_type, init_expr);
gimplify_assign (init_var, init_expr, &init_seq);
gimple *init_end = gimple_seq_last (init_seq);
gsi_insert_seq_before (gsi, init_seq, GSI_SAME_STMT);
/* Split the block just after the init stmts. */
basic_block pre_bb = gsi_bb (*gsi);
edge pre_edge = split_block (pre_bb, init_end);
basic_block loop_bb = pre_edge->dest;
pre_bb = pre_edge->src;
/* Reset the iterator. */
*gsi = gsi_for_stmt (gsi_stmt (*gsi));
tree expect_var = make_ssa_name (arg_type);
tree actual_var = make_ssa_name (arg_type);
tree write_var = make_ssa_name (arg_type);
/* Build and insert the reduction calculation. */
gimple_seq red_seq = NULL;
tree write_expr = fold_build1 (code, var_type, expect_var);
write_expr = fold_build2 (op, var_type, write_expr, var);
write_expr = fold_build1 (code, arg_type, write_expr);
gimplify_assign (write_var, write_expr, &red_seq);
gsi_insert_seq_before (gsi, red_seq, GSI_SAME_STMT);
/* Build & insert the cmp&swap sequence. */
gimple_seq latch_seq = NULL;
tree swap_expr = build_call_expr_loc (loc, swap_fn, 3,
ptr, expect_var, write_var);
gimplify_assign (actual_var, swap_expr, &latch_seq);
gcond *cond = gimple_build_cond (EQ_EXPR, actual_var, expect_var,
NULL_TREE, NULL_TREE);
gimple_seq_add_stmt (&latch_seq, cond);
gimple *latch_end = gimple_seq_last (latch_seq);
gsi_insert_seq_before (gsi, latch_seq, GSI_SAME_STMT);
/* Split the block just after the latch stmts. */
edge post_edge = split_block (loop_bb, latch_end);
basic_block post_bb = post_edge->dest;
loop_bb = post_edge->src;
*gsi = gsi_for_stmt (gsi_stmt (*gsi));
post_edge->flags ^= EDGE_TRUE_VALUE | EDGE_FALLTHRU;
/* post_edge->probability = profile_probability::even (); */
edge loop_edge = make_edge (loop_bb, loop_bb, EDGE_FALSE_VALUE);
/* loop_edge->probability = profile_probability::even (); */
set_immediate_dominator (CDI_DOMINATORS, loop_bb, pre_bb);
set_immediate_dominator (CDI_DOMINATORS, post_bb, loop_bb);
gphi *phi = create_phi_node (expect_var, loop_bb);
add_phi_arg (phi, init_var, pre_edge, loc);
add_phi_arg (phi, actual_var, loop_edge, loc);
loop *loop = alloc_loop ();
loop->header = loop_bb;
loop->latch = loop_bb;
add_loop (loop, loop_bb->loop_father);
return fold_build1 (code, var_type, write_var);
}
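The GIMPLE built by the function above implements the cmp&swap retry loop from the comment. The same scheme can be written directly in C with GCC's atomic builtins; this host-runnable sketch uses addition as the reduction operator for concreteness and is a model of the generated code, not part of gcn-tree.c:

```c
#include <assert.h>
#include <stdint.h>

/* Lockless reduction update: retry a compare-and-swap until the value
   observed in *PTR matches our guess, then the write sticks.  */
static uint32_t
lockless_add (uint32_t *ptr, uint32_t myval)
{
  uint32_t guess = *ptr;
  uint32_t write;
  do
    {
      write = guess + myval;
      /* On failure, GUESS is updated to the value actually seen, so the
	 next iteration recomputes WRITE from fresh data.  */
    }
  while (!__atomic_compare_exchange_n (ptr, &guess, write, 0,
				       __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST));
  return write;
}
```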
/* Helper function for gcn_reduction_update.
Insert code to lockfully update *PTR with *PTR OP VAR just before
GSI. This is necessary for types larger than 64 bits, where there
is no cmp&swap instruction to implement a lockless scheme. We use
a lock variable in global memory.
while (cmp&swap (&lock_var, 0, 1))
continue;
T accum = *ptr;
accum = accum OP var;
*ptr = accum;
cmp&swap (&lock_var, 1, 0);
return accum;
A lock in global memory is necessary to force execution engine
descheduling and avoid resource starvation that can occur if the
lock is in shared memory. */
static tree
gcn_lockfull_update (location_t loc, gimple_stmt_iterator *gsi,
tree ptr, tree var, tree_code op)
{
tree var_type = TREE_TYPE (var);
tree swap_fn = gcn_builtin_decl (GCN_BUILTIN_CMP_SWAP, true);
tree uns_unlocked = build_int_cst (unsigned_type_node, 0);
tree uns_locked = build_int_cst (unsigned_type_node, 1);
/* Split the block just before the gsi. Insert a gimple nop to make
this easier. */
gimple *nop = gimple_build_nop ();
gsi_insert_before (gsi, nop, GSI_SAME_STMT);
basic_block entry_bb = gsi_bb (*gsi);
edge entry_edge = split_block (entry_bb, nop);
basic_block lock_bb = entry_edge->dest;
/* Reset the iterator. */
*gsi = gsi_for_stmt (gsi_stmt (*gsi));
/* Build and insert the locking sequence. */
gimple_seq lock_seq = NULL;
tree lock_var = make_ssa_name (unsigned_type_node);
tree lock_expr = gcn_global_lock_addr ();
lock_expr = build_call_expr_loc (loc, swap_fn, 3, lock_expr,
uns_unlocked, uns_locked);
gimplify_assign (lock_var, lock_expr, &lock_seq);
gcond *cond = gimple_build_cond (EQ_EXPR, lock_var, uns_unlocked,
NULL_TREE, NULL_TREE);
gimple_seq_add_stmt (&lock_seq, cond);
gimple *lock_end = gimple_seq_last (lock_seq);
gsi_insert_seq_before (gsi, lock_seq, GSI_SAME_STMT);
/* Split the block just after the lock sequence. */
edge locked_edge = split_block (lock_bb, lock_end);
basic_block update_bb = locked_edge->dest;
lock_bb = locked_edge->src;
*gsi = gsi_for_stmt (gsi_stmt (*gsi));
/* Create the lock loop. */
locked_edge->flags ^= EDGE_TRUE_VALUE | EDGE_FALLTHRU;
locked_edge->probability = profile_probability::even ();
edge loop_edge = make_edge (lock_bb, lock_bb, EDGE_FALSE_VALUE);
loop_edge->probability = profile_probability::even ();
set_immediate_dominator (CDI_DOMINATORS, lock_bb, entry_bb);
set_immediate_dominator (CDI_DOMINATORS, update_bb, lock_bb);
/* Create the loop structure. */
loop *lock_loop = alloc_loop ();
lock_loop->header = lock_bb;
lock_loop->latch = lock_bb;
lock_loop->nb_iterations_estimate = 1;
lock_loop->any_estimate = true;
add_loop (lock_loop, entry_bb->loop_father);
/* Build and insert the reduction calculation. */
gimple_seq red_seq = NULL;
tree acc_in = make_ssa_name (var_type);
tree ref_in = build_simple_mem_ref (ptr);
TREE_THIS_VOLATILE (ref_in) = 1;
gimplify_assign (acc_in, ref_in, &red_seq);
tree acc_out = make_ssa_name (var_type);
tree update_expr = fold_build2 (op, var_type, ref_in, var);
gimplify_assign (acc_out, update_expr, &red_seq);
tree ref_out = build_simple_mem_ref (ptr);
TREE_THIS_VOLATILE (ref_out) = 1;
gimplify_assign (ref_out, acc_out, &red_seq);
gsi_insert_seq_before (gsi, red_seq, GSI_SAME_STMT);
/* Build & insert the unlock sequence. */
gimple_seq unlock_seq = NULL;
tree unlock_expr = gcn_global_lock_addr ();
unlock_expr = build_call_expr_loc (loc, swap_fn, 3, unlock_expr,
uns_locked, uns_unlocked);
gimplify_and_add (unlock_expr, &unlock_seq);
gsi_insert_seq_before (gsi, unlock_seq, GSI_SAME_STMT);
return acc_out;
}
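For types wider than 64 bits there is no cmp&swap, so the function above emits the lock-based sequence from its comment: acquire a global lock with cmp&swap, do a plain read-modify-write, then release. A host-runnable model of that sequence, again using addition and a 64-bit accumulator as a stand-in for the wider types (illustrative only):

```c
#include <assert.h>
#include <stdint.h>

/* Global lock word: 0 = unlocked, 1 = locked, as in the scheme above.  */
static unsigned lock_var;

/* Lock-based reduction update for types with no cmp&swap instruction.  */
static int64_t
lockfull_add (int64_t *ptr, int64_t var)
{
  unsigned expected = 0;
  /* Spin until we swap the lock from 0 (unlocked) to 1 (locked).  */
  while (!__atomic_compare_exchange_n (&lock_var, &expected, 1, 0,
				       __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
    expected = 0;
  int64_t accum = *ptr;
  accum = accum + var;
  *ptr = accum;
  /* Release the lock.  */
  __atomic_store_n (&lock_var, 0, __ATOMIC_RELEASE);
  return accum;
}
```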
/* Emit a sequence to update a reduction accumulator at *PTR with the
value held in VAR using operator OP. Return the updated value.
TODO: optimize for atomic ops and independent complex ops. */
static tree
gcn_reduction_update (location_t loc, gimple_stmt_iterator *gsi,
tree ptr, tree var, tree_code op)
{
tree type = TREE_TYPE (var);
tree size = TYPE_SIZE (type);
if (size == TYPE_SIZE (unsigned_type_node)
|| size == TYPE_SIZE (long_long_unsigned_type_node))
return gcn_lockless_update (loc, gsi, ptr, var, op);
else
return gcn_lockfull_update (loc, gsi, ptr, var, op);
}
/* Return a temporary variable decl to use for an OpenACC worker reduction. */
static tree
gcn_goacc_get_worker_red_decl (tree type, unsigned offset)
{
machine_function *machfun = cfun->machine;
tree existing_decl;
if (TREE_CODE (type) == REFERENCE_TYPE)
type = TREE_TYPE (type);
tree var_type
= build_qualified_type (type,
(TYPE_QUALS (type)
| ENCODE_QUAL_ADDR_SPACE (ADDR_SPACE_LDS)));
if (machfun->reduc_decls
&& offset < machfun->reduc_decls->length ()
&& (existing_decl = (*machfun->reduc_decls)[offset]))
{
gcc_assert (TREE_TYPE (existing_decl) == var_type);
return existing_decl;
}
else
{
char name[50];
sprintf (name, ".oacc_reduction_%u", offset);
tree decl = create_tmp_var_raw (var_type, name);
DECL_CONTEXT (decl) = NULL_TREE;
TREE_STATIC (decl) = 1;
varpool_node::finalize_decl (decl);
vec_safe_grow_cleared (machfun->reduc_decls, offset + 1);
(*machfun->reduc_decls)[offset] = decl;
return decl;
}
return NULL_TREE;
}
/* Expand IFN_GOACC_REDUCTION_SETUP. */
static void
gcn_goacc_reduction_setup (gcall *call)
{
gimple_stmt_iterator gsi = gsi_for_stmt (call);
tree lhs = gimple_call_lhs (call);
tree var = gimple_call_arg (call, 2);
int level = TREE_INT_CST_LOW (gimple_call_arg (call, 3));
gimple_seq seq = NULL;
push_gimplify_context (true);
if (level != GOMP_DIM_GANG)
{
/* Copy the receiver object. */
tree ref_to_res = gimple_call_arg (call, 1);
if (!integer_zerop (ref_to_res))
var = build_simple_mem_ref (ref_to_res);
}
if (level == GOMP_DIM_WORKER)
{
tree var_type = TREE_TYPE (var);
/* Store incoming value to worker reduction buffer. */
tree offset = gimple_call_arg (call, 5);
tree decl
= gcn_goacc_get_worker_red_decl (var_type, TREE_INT_CST_LOW (offset));
gimplify_assign (decl, var, &seq);
}
if (lhs)
gimplify_assign (lhs, var, &seq);
pop_gimplify_context (NULL);
gsi_replace_with_seq (&gsi, seq, true);
}
/* Expand IFN_GOACC_REDUCTION_INIT. */
static void
gcn_goacc_reduction_init (gcall *call)
{
gimple_stmt_iterator gsi = gsi_for_stmt (call);
tree lhs = gimple_call_lhs (call);
tree var = gimple_call_arg (call, 2);
int level = TREE_INT_CST_LOW (gimple_call_arg (call, 3));
enum tree_code rcode
= (enum tree_code) TREE_INT_CST_LOW (gimple_call_arg (call, 4));
tree init = omp_reduction_init_op (gimple_location (call), rcode,
TREE_TYPE (var));
gimple_seq seq = NULL;
push_gimplify_context (true);
if (level == GOMP_DIM_GANG)
{
/* If there's no receiver object, propagate the incoming VAR. */
tree ref_to_res = gimple_call_arg (call, 1);
if (integer_zerop (ref_to_res))
init = var;
}
if (lhs)
gimplify_assign (lhs, init, &seq);
pop_gimplify_context (NULL);
gsi_replace_with_seq (&gsi, seq, true);
}
/* Expand IFN_GOACC_REDUCTION_FINI. */
static void
gcn_goacc_reduction_fini (gcall *call)
{
gimple_stmt_iterator gsi = gsi_for_stmt (call);
tree lhs = gimple_call_lhs (call);
tree ref_to_res = gimple_call_arg (call, 1);
tree var = gimple_call_arg (call, 2);
int level = TREE_INT_CST_LOW (gimple_call_arg (call, 3));
enum tree_code op
= (enum tree_code) TREE_INT_CST_LOW (gimple_call_arg (call, 4));
gimple_seq seq = NULL;
tree r = NULL_TREE;
push_gimplify_context (true);
tree accum = NULL_TREE;
if (level == GOMP_DIM_WORKER)
{
tree var_type = TREE_TYPE (var);
tree offset = gimple_call_arg (call, 5);
tree decl
= gcn_goacc_get_worker_red_decl (var_type, TREE_INT_CST_LOW (offset));
accum = build_fold_addr_expr (decl);
}
else if (integer_zerop (ref_to_res))
r = var;
else
accum = ref_to_res;
if (accum)
{
/* UPDATE the accumulator. */
gsi_insert_seq_before (&gsi, seq, GSI_SAME_STMT);
seq = NULL;
r = gcn_reduction_update (gimple_location (call), &gsi, accum, var, op);
}
if (lhs)
gimplify_assign (lhs, r, &seq);
pop_gimplify_context (NULL);
gsi_replace_with_seq (&gsi, seq, true);
}
/* Expand IFN_GOACC_REDUCTION_TEARDOWN. */
static void
gcn_goacc_reduction_teardown (gcall *call)
{
gimple_stmt_iterator gsi = gsi_for_stmt (call);
tree lhs = gimple_call_lhs (call);
tree var = gimple_call_arg (call, 2);
int level = TREE_INT_CST_LOW (gimple_call_arg (call, 3));
gimple_seq seq = NULL;
push_gimplify_context (true);
if (level == GOMP_DIM_WORKER)
{
tree var_type = TREE_TYPE (var);
/* Read the worker reduction buffer. */
tree offset = gimple_call_arg (call, 5);
tree decl
= gcn_goacc_get_worker_red_decl (var_type, TREE_INT_CST_LOW (offset));
var = decl;
}
if (level != GOMP_DIM_GANG)
{
/* Write to the receiver object. */
tree ref_to_res = gimple_call_arg (call, 1);
if (!integer_zerop (ref_to_res))
gimplify_assign (build_simple_mem_ref (ref_to_res), var, &seq);
}
if (lhs)
gimplify_assign (lhs, var, &seq);
pop_gimplify_context (NULL);
gsi_replace_with_seq (&gsi, seq, true);
}
/* Implement TARGET_GOACC_REDUCTION.
Expand calls to the GOACC REDUCTION internal function, into a sequence of
gimple instructions. */
void
gcn_goacc_reduction (gcall *call)
{
int level = TREE_INT_CST_LOW (gimple_call_arg (call, 3));
if (level == GOMP_DIM_VECTOR)
{
default_goacc_reduction (call);
return;
}
unsigned code = (unsigned) TREE_INT_CST_LOW (gimple_call_arg (call, 0));
switch (code)
{
case IFN_GOACC_REDUCTION_SETUP:
gcn_goacc_reduction_setup (call);
break;
case IFN_GOACC_REDUCTION_INIT:
gcn_goacc_reduction_init (call);
break;
case IFN_GOACC_REDUCTION_FINI:
gcn_goacc_reduction_fini (call);
break;
case IFN_GOACC_REDUCTION_TEARDOWN:
gcn_goacc_reduction_teardown (call);
break;
default:
gcc_unreachable ();
}
}
/* Implement TARGET_GOACC_ADJUST_PROPAGATION_RECORD.
Tweak (worker) propagation record, e.g. to put it in shared memory. */
tree
gcn_goacc_adjust_propagation_record (tree record_type, bool sender,
const char *name)
{
tree type = record_type;
TYPE_ADDR_SPACE (type) = ADDR_SPACE_LDS;
if (!sender)
type = build_pointer_type (type);
tree decl = create_tmp_var_raw (type, name);
if (sender)
{
DECL_CONTEXT (decl) = NULL_TREE;
TREE_STATIC (decl) = 1;
}
if (sender)
varpool_node::finalize_decl (decl);
return decl;
}
void
gcn_goacc_adjust_gangprivate_decl (tree var)
{
tree type = TREE_TYPE (var);
tree lds_type = build_qualified_type (type,
TYPE_QUALS_NO_ADDR_SPACE (type)
| ENCODE_QUAL_ADDR_SPACE (ADDR_SPACE_LDS));
machine_function *machfun = cfun->machine;
TREE_TYPE (var) = lds_type;
TREE_STATIC (var) = 1;
/* We're making VAR static. We have to mangle the name to avoid collisions
between different local variables that share the same names. */
lhd_set_decl_assembler_name (var);
varpool_node::finalize_decl (var);
if (machfun)
machfun->use_flat_addressing = true;
}
/* }}} */
/* Copyright (C) 2016-2019 Free Software Foundation, Inc.
This file is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
Software Foundation; either version 3 of the License, or (at your option)
any later version.
This file is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
for more details.
You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING3. If not see
<http://www.gnu.org/licenses/>. */
#include "config/gcn/gcn-opts.h"
#define TARGET_CPU_CPP_BUILTINS() \
do \
{ \
builtin_define ("__AMDGCN__"); \
if (TARGET_GCN3) \
builtin_define ("__GCN3__"); \
else if (TARGET_GCN5) \
builtin_define ("__GCN5__"); \
} \
while(0)
/* Support for a compile-time default architecture and tuning.
The rules are:
--with-arch is ignored if -march is specified.
--with-tune is ignored if -mtune is specified. */
#define OPTION_DEFAULT_SPECS \
{"arch", "%{!march=*:-march=%(VALUE)}" }, \
{"tune", "%{!mtune=*:-mtune=%(VALUE)}" }
/* Default target_flags if no switches specified. */
#ifndef TARGET_DEFAULT
#define TARGET_DEFAULT 0
#endif
/* Storage Layout */
#define BITS_BIG_ENDIAN 0
#define BYTES_BIG_ENDIAN 0
#define WORDS_BIG_ENDIAN 0
#define BITS_PER_WORD 32
#define UNITS_PER_WORD (BITS_PER_WORD/BITS_PER_UNIT)
#define LIBGCC2_UNITS_PER_WORD 4
#define POINTER_SIZE 64
#define PARM_BOUNDARY 64
#define STACK_BOUNDARY 64
#define FUNCTION_BOUNDARY 32
#define BIGGEST_ALIGNMENT 64
#define EMPTY_FIELD_BOUNDARY 32
#define MAX_FIXED_MODE_SIZE 64
#define MAX_REGS_PER_ADDRESS 2
#define STACK_SIZE_MODE DImode
#define Pmode DImode
#define CASE_VECTOR_MODE DImode
#define FUNCTION_MODE QImode
#define DATA_ALIGNMENT(TYPE,ALIGN) ((ALIGN) > 128 ? (ALIGN) : 128)
#define LOCAL_ALIGNMENT(TYPE,ALIGN) ((ALIGN) > 64 ? (ALIGN) : 64)
#define STACK_SLOT_ALIGNMENT(TYPE,MODE,ALIGN) ((ALIGN) > 64 ? (ALIGN) : 64)
#define STRICT_ALIGNMENT 1
/* Type Layout: match what x86_64 does. */
#define INT_TYPE_SIZE 32
#define LONG_TYPE_SIZE 64
#define LONG_LONG_TYPE_SIZE 64
#define FLOAT_TYPE_SIZE 32
#define DOUBLE_TYPE_SIZE 64
#define LONG_DOUBLE_TYPE_SIZE 64
#define DEFAULT_SIGNED_CHAR 1
#define PCC_BITFIELD_TYPE_MATTERS 1
/* Frame Layout */
#define FRAME_GROWS_DOWNWARD 0
#define ARGS_GROW_DOWNWARD 1
#define STACK_POINTER_OFFSET 0
#define FIRST_PARM_OFFSET(FNDECL) 0
#define DYNAMIC_CHAIN_ADDRESS(FP) plus_constant (Pmode, (FP), -16)
#define INCOMING_RETURN_ADDR_RTX gen_rtx_REG (Pmode, LINK_REGNUM)
#define STACK_DYNAMIC_OFFSET(FNDECL) (-crtl->outgoing_args_size)
#define ACCUMULATE_OUTGOING_ARGS 1
#define RETURN_ADDR_RTX(COUNT,FRAMEADDR) \
((COUNT) == 0 ? get_hard_reg_initial_val (Pmode, LINK_REGNUM) : NULL_RTX)
/* Register Basics */
#define FIRST_SGPR_REG 0
#define SGPR_REGNO(N) ((N)+FIRST_SGPR_REG)
#define LAST_SGPR_REG 101
#define FLAT_SCRATCH_REG 102
#define FLAT_SCRATCH_LO_REG 102
#define FLAT_SCRATCH_HI_REG 103
#define XNACK_MASK_REG 104
#define XNACK_MASK_LO_REG 104
#define XNACK_MASK_HI_REG 105
#define VCC_LO_REG 106
#define VCC_HI_REG 107
#define VCCZ_REG 108
#define TBA_REG 109
#define TBA_LO_REG 109
#define TBA_HI_REG 110
#define TMA_REG 111
#define TMA_LO_REG 111
#define TMA_HI_REG 112
#define TTMP0_REG 113
#define TTMP11_REG 124
#define M0_REG 125
#define EXEC_REG 126
#define EXEC_LO_REG 126
#define EXEC_HI_REG 127
#define EXECZ_REG 128
#define SCC_REG 129
/* 130-159 are reserved to simplify masks. */
#define FIRST_VGPR_REG 160
#define VGPR_REGNO(N) ((N)+FIRST_VGPR_REG)
#define LAST_VGPR_REG 415
/* Frame Registers, and other registers */
#define HARD_FRAME_POINTER_REGNUM 14
#define STACK_POINTER_REGNUM 16
#define LINK_REGNUM 18
#define EXEC_SAVE_REG 20
#define CC_SAVE_REG 22
#define RETURN_VALUE_REG 24 /* Must be divisible by 4. */
#define STATIC_CHAIN_REGNUM 30
#define WORK_ITEM_ID_Z_REG 162
#define SOFT_ARG_REG 416
#define FRAME_POINTER_REGNUM 418
#define FIRST_PSEUDO_REGISTER 420
#define FIRST_PARM_REG 24
#define NUM_PARM_REGS 6
/* There is no arg pointer. Just choose a random fixed register that does
not interfere with anything. */
#define ARG_POINTER_REGNUM SOFT_ARG_REG
#define HARD_FRAME_POINTER_IS_ARG_POINTER 0
#define HARD_FRAME_POINTER_IS_FRAME_POINTER 0
#define SGPR_OR_VGPR_REGNO_P(N) ((N) <= LAST_SGPR_REG \
|| ((N) >= FIRST_VGPR_REG && (N) <= LAST_VGPR_REG))
#define SGPR_REGNO_P(N) ((N) <= LAST_SGPR_REG)
#define VGPR_REGNO_P(N) ((N)>=FIRST_VGPR_REG && (N) <= LAST_VGPR_REG)
#define SSRC_REGNO_P(N) ((N) <= SCC_REG && (N) != VCCZ_REG)
#define SDST_REGNO_P(N) ((N) <= EXEC_HI_REG && (N) != VCCZ_REG)
#define CC_REG_P(X) (REG_P (X) && CC_REGNO_P (REGNO (X)))
#define CC_REGNO_P(X) ((X) == SCC_REG || (X) == VCC_REG)
#define FUNCTION_ARG_REGNO_P(N) \
((N) >= FIRST_PARM_REG && (N) < (FIRST_PARM_REG + NUM_PARM_REGS))
#define FIXED_REGISTERS { \
/* Scalars. */ \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
/* fp sp lr. */ \
0, 0, 0, 0, 1, 1, 1, 1, 0, 0, \
/* exec_save, cc_save */ \
1, 1, 1, 1, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, \
/* Special regs and padding. */ \
/* flat xnack vcc tba tma ttmp */ \
1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, \
/* m0 exec scc */ \
1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, \
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, \
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, \
    /* VGPRs */						    \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
/* Other registers. */ \
1, 1, 1, 1 \
}
#define CALL_USED_REGISTERS { \
/* Scalars. */ \
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, \
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, \
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, \
1, 1, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, \
/* Special regs and padding. */ \
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, \
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, \
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, \
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, \
    /* VGPRs */						    \
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
/* Other registers. */ \
1, 1, 1, 1 \
}
#define HARD_REGNO_RENAME_OK(FROM, TO) \
gcn_hard_regno_rename_ok (FROM, TO)
#define HARD_REGNO_CALLER_SAVE_MODE(HARDREG, NREGS, MODE) \
gcn_hard_regno_caller_save_mode ((HARDREG), (NREGS), (MODE))
/* Register Classes */
enum reg_class
{
NO_REGS,
/* SCC */
SCC_CONDITIONAL_REG,
/* VCCZ */
VCCZ_CONDITIONAL_REG,
/* VCC */
VCC_CONDITIONAL_REG,
/* EXECZ */
EXECZ_CONDITIONAL_REG,
/* SCC VCCZ EXECZ */
ALL_CONDITIONAL_REGS,
/* EXEC */
EXEC_MASK_REG,
/* SGPR0-101 */
SGPR_REGS,
/* SGPR0-101 EXEC_LO/EXEC_HI */
SGPR_EXEC_REGS,
/* SGPR0-101, FLAT_SCRATCH_LO/HI, VCC LO/HI, TBA LO/HI, TMA LO/HI, TTMP0-11,
M0, VCCZ, SCC
(EXEC_LO/HI, EXECZ excluded to prevent compiler misuse.) */
SGPR_VOP_SRC_REGS,
/* SGPR0-101, FLAT_SCRATCH_LO/HI, XNACK_MASK_LO/HI, VCC LO/HI, TBA LO/HI
TMA LO/HI, TTMP0-11 */
SGPR_MEM_SRC_REGS,
/* SGPR0-101, FLAT_SCRATCH_LO/HI, XNACK_MASK_LO/HI, VCC LO/HI, TBA LO/HI
TMA LO/HI, TTMP0-11, M0, EXEC LO/HI */
SGPR_DST_REGS,
/* SGPR0-101, FLAT_SCRATCH_LO/HI, XNACK_MASK_LO/HI, VCC LO/HI, TBA LO/HI
TMA LO/HI, TTMP0-11 */
SGPR_SRC_REGS,
GENERAL_REGS,
VGPR_REGS,
ALL_GPR_REGS,
SRCDST_REGS,
AFP_REGS,
ALL_REGS,
LIM_REG_CLASSES
};
#define N_REG_CLASSES (int) LIM_REG_CLASSES
#define REG_CLASS_NAMES \
{ "NO_REGS", \
"SCC_CONDITIONAL_REG", \
"VCCZ_CONDITIONAL_REG", \
"VCC_CONDITIONAL_REG", \
"EXECZ_CONDITIONAL_REG", \
"ALL_CONDITIONAL_REGS", \
"EXEC_MASK_REG", \
"SGPR_REGS", \
"SGPR_EXEC_REGS", \
   "SGPR_VOP_SRC_REGS",	   \
"SGPR_MEM_SRC_REGS", \
"SGPR_DST_REGS", \
"SGPR_SRC_REGS", \
"GENERAL_REGS", \
"VGPR_REGS", \
"ALL_GPR_REGS", \
"SRCDST_REGS", \
"AFP_REGS", \
"ALL_REGS" \
}
#define NAMED_REG_MASK(N) (1<<((N)-3*32))
#define NAMED_REG_MASK2(N) (1<<((N)-4*32))
#define REG_CLASS_CONTENTS { \
/* NO_REGS. */ \
{0, 0, 0, 0, \
0, 0, 0, 0, \
0, 0, 0, 0, 0, 0}, \
/* SCC_CONDITIONAL_REG. */ \
{0, 0, 0, 0, \
NAMED_REG_MASK2 (SCC_REG), 0, 0, 0, \
0, 0, 0, 0, 0}, \
/* VCCZ_CONDITIONAL_REG. */ \
{0, 0, 0, NAMED_REG_MASK (VCCZ_REG), \
0, 0, 0, 0, \
0, 0, 0, 0, 0, 0}, \
/* VCC_CONDITIONAL_REG. */ \
{0, 0, 0, NAMED_REG_MASK (VCC_LO_REG)|NAMED_REG_MASK (VCC_HI_REG), \
0, 0, 0, 0, \
0, 0, 0, 0, 0, 0}, \
/* EXECZ_CONDITIONAL_REG. */ \
{0, 0, 0, 0, \
NAMED_REG_MASK2 (EXECZ_REG), 0, 0, 0, \
0, 0, 0, 0, 0}, \
/* ALL_CONDITIONAL_REGS. */ \
{0, 0, 0, NAMED_REG_MASK (VCCZ_REG), \
NAMED_REG_MASK2 (EXECZ_REG) | NAMED_REG_MASK2 (SCC_REG), 0, 0, 0, \
0, 0, 0, 0, 0, 0}, \
/* EXEC_MASK_REG. */ \
{0, 0, 0, NAMED_REG_MASK (EXEC_LO_REG) | NAMED_REG_MASK (EXEC_HI_REG), \
0, 0, 0, 0, \
0, 0, 0, 0, 0, 0}, \
/* SGPR_REGS. */ \
{0xffffffff, 0xffffffff, 0xffffffff, 0xf1, \
0, 0, 0, 0, \
0, 0, 0, 0, 0, 0}, \
/* SGPR_EXEC_REGS. */ \
{0xffffffff, 0xffffffff, 0xffffffff, \
0xf1 | NAMED_REG_MASK (EXEC_LO_REG) | NAMED_REG_MASK (EXEC_HI_REG), \
0, 0, 0, 0, \
0, 0, 0, 0, 0, 0}, \
/* SGPR_VOP_SRC_REGS. */ \
{0xffffffff, 0xffffffff, 0xffffffff, \
0xffffffff \
-NAMED_REG_MASK (EXEC_LO_REG) \
-NAMED_REG_MASK (EXEC_HI_REG), \
NAMED_REG_MASK2 (SCC_REG), 0, 0, 0, \
0, 0, 0, 0, 0, 0}, \
/* SGPR_MEM_SRC_REGS. */ \
{0xffffffff, 0xffffffff, 0xffffffff, \
0xffffffff-NAMED_REG_MASK (VCCZ_REG)-NAMED_REG_MASK (M0_REG) \
-NAMED_REG_MASK (EXEC_LO_REG)-NAMED_REG_MASK (EXEC_HI_REG), \
0, 0, 0, 0, \
0, 0, 0, 0, 0, 0}, \
/* SGPR_DST_REGS. */ \
{0xffffffff, 0xffffffff, 0xffffffff, \
0xffffffff-NAMED_REG_MASK (VCCZ_REG), \
0, 0, 0, 0, \
0, 0, 0, 0, 0, 0}, \
/* SGPR_SRC_REGS. */ \
{0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, \
NAMED_REG_MASK2 (EXECZ_REG) | NAMED_REG_MASK2 (SCC_REG), 0, 0, 0, \
0, 0, 0, 0, 0, 0}, \
/* GENERAL_REGS. */ \
{0xffffffff, 0xffffffff, 0xffffffff, 0xf1, \
0, 0, 0, 0, \
0, 0, 0, 0, 0, 0}, \
/* VGPR_REGS. */ \
{0, 0, 0, 0, \
0, 0xffffffff, 0xffffffff, 0xffffffff, \
0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, 0}, \
/* ALL_GPR_REGS. */ \
{0xffffffff, 0xffffffff, 0xffffffff, 0xf1, \
0, 0xffffffff, 0xffffffff, 0xffffffff, \
0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, 0}, \
/* SRCDST_REGS. */ \
{0xffffffff, 0xffffffff, 0xffffffff, \
0xffffffff-NAMED_REG_MASK (VCCZ_REG), \
0, 0xffffffff, 0xffffffff, 0xffffffff, \
0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, 0}, \
/* AFP_REGS. */ \
{0, 0, 0, 0, \
0, 0, 0, 0, \
0, 0, 0, 0, 0, 0xf}, \
/* ALL_REGS. */ \
{0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, \
0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, \
0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, 0 }}
#define REGNO_REG_CLASS(REGNO) gcn_regno_reg_class (REGNO)
#define MODE_CODE_BASE_REG_CLASS(MODE, AS, OUTER, INDEX) \
gcn_mode_code_base_reg_class (MODE, AS, OUTER, INDEX)
#define REGNO_MODE_CODE_OK_FOR_BASE_P(NUM, MODE, AS, OUTER, INDEX) \
gcn_regno_mode_code_ok_for_base_p (NUM, MODE, AS, OUTER, INDEX)
#define INDEX_REG_CLASS VGPR_REGS
#define REGNO_OK_FOR_INDEX_P(regno) regno_ok_for_index_p (regno)
/* Address spaces. */
enum gcn_address_spaces
{
ADDR_SPACE_DEFAULT = 0,
ADDR_SPACE_FLAT,
ADDR_SPACE_SCALAR_FLAT,
ADDR_SPACE_FLAT_SCRATCH,
ADDR_SPACE_LDS,
ADDR_SPACE_GDS,
ADDR_SPACE_SCRATCH,
ADDR_SPACE_GLOBAL
};
#define REGISTER_TARGET_PRAGMAS() do { \
c_register_addr_space ("__flat", ADDR_SPACE_FLAT); \
c_register_addr_space ("__flat_scratch", ADDR_SPACE_FLAT_SCRATCH); \
c_register_addr_space ("__scalar_flat", ADDR_SPACE_SCALAR_FLAT); \
c_register_addr_space ("__lds", ADDR_SPACE_LDS); \
c_register_addr_space ("__gds", ADDR_SPACE_GDS); \
c_register_addr_space ("__global", ADDR_SPACE_GLOBAL); \
} while (0);
#define STACK_ADDR_SPACE \
(TARGET_GCN5_PLUS ? ADDR_SPACE_GLOBAL : ADDR_SPACE_FLAT)
#define DEFAULT_ADDR_SPACE \
((cfun && cfun->machine && !cfun->machine->use_flat_addressing) \
? ADDR_SPACE_GLOBAL : ADDR_SPACE_FLAT)
#define AS_SCALAR_FLAT_P(AS) ((AS) == ADDR_SPACE_SCALAR_FLAT)
#define AS_FLAT_SCRATCH_P(AS) ((AS) == ADDR_SPACE_FLAT_SCRATCH)
#define AS_FLAT_P(AS) ((AS) == ADDR_SPACE_FLAT \
|| ((AS) == ADDR_SPACE_DEFAULT \
&& DEFAULT_ADDR_SPACE == ADDR_SPACE_FLAT))
#define AS_LDS_P(AS) ((AS) == ADDR_SPACE_LDS)
#define AS_GDS_P(AS) ((AS) == ADDR_SPACE_GDS)
#define AS_SCRATCH_P(AS) ((AS) == ADDR_SPACE_SCRATCH)
#define AS_GLOBAL_P(AS) ((AS) == ADDR_SPACE_GLOBAL \
|| ((AS) == ADDR_SPACE_DEFAULT \
&& DEFAULT_ADDR_SPACE == ADDR_SPACE_GLOBAL))
#define AS_ANY_FLAT_P(AS) (AS_FLAT_SCRATCH_P (AS) || AS_FLAT_P (AS))
#define AS_ANY_DS_P(AS) (AS_LDS_P (AS) || AS_GDS_P (AS))
/* Instruction Output */
#define REGISTER_NAMES \
{"s0", "s1", "s2", "s3", "s4", "s5", "s6", "s7", "s8", "s9", "s10", \
"s11", "s12", "s13", "s14", "s15", "s16", "s17", "s18", "s19", "s20", \
"s21", "s22", "s23", "s24", "s25", "s26", "s27", "s28", "s29", "s30", \
"s31", "s32", "s33", "s34", "s35", "s36", "s37", "s38", "s39", "s40", \
"s41", "s42", "s43", "s44", "s45", "s46", "s47", "s48", "s49", "s50", \
"s51", "s52", "s53", "s54", "s55", "s56", "s57", "s58", "s59", "s60", \
"s61", "s62", "s63", "s64", "s65", "s66", "s67", "s68", "s69", "s70", \
"s71", "s72", "s73", "s74", "s75", "s76", "s77", "s78", "s79", "s80", \
"s81", "s82", "s83", "s84", "s85", "s86", "s87", "s88", "s89", "s90", \
"s91", "s92", "s93", "s94", "s95", "s96", "s97", "s98", "s99", \
"s100", "s101", \
"flat_scratch_lo", "flat_scratch_hi", "xnack_mask_lo", "xnack_mask_hi", \
"vcc_lo", "vcc_hi", "vccz", "tba_lo", "tba_hi", "tma_lo", "tma_hi", \
"ttmp0", "ttmp1", "ttmp2", "ttmp3", "ttmp4", "ttmp5", "ttmp6", "ttmp7", \
"ttmp8", "ttmp9", "ttmp10", "ttmp11", "m0", "exec_lo", "exec_hi", \
"execz", "scc", \
"res130", "res131", "res132", "res133", "res134", "res135", "res136", \
"res137", "res138", "res139", "res140", "res141", "res142", "res143", \
"res144", "res145", "res146", "res147", "res148", "res149", "res150", \
"res151", "res152", "res153", "res154", "res155", "res156", "res157", \
"res158", "res159", \
"v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7", "v8", "v9", "v10", \
"v11", "v12", "v13", "v14", "v15", "v16", "v17", "v18", "v19", "v20", \
"v21", "v22", "v23", "v24", "v25", "v26", "v27", "v28", "v29", "v30", \
"v31", "v32", "v33", "v34", "v35", "v36", "v37", "v38", "v39", "v40", \
"v41", "v42", "v43", "v44", "v45", "v46", "v47", "v48", "v49", "v50", \
"v51", "v52", "v53", "v54", "v55", "v56", "v57", "v58", "v59", "v60", \
"v61", "v62", "v63", "v64", "v65", "v66", "v67", "v68", "v69", "v70", \
"v71", "v72", "v73", "v74", "v75", "v76", "v77", "v78", "v79", "v80", \
"v81", "v82", "v83", "v84", "v85", "v86", "v87", "v88", "v89", "v90", \
"v91", "v92", "v93", "v94", "v95", "v96", "v97", "v98", "v99", "v100", \
"v101", "v102", "v103", "v104", "v105", "v106", "v107", "v108", "v109", \
"v110", "v111", "v112", "v113", "v114", "v115", "v116", "v117", "v118", \
"v119", "v120", "v121", "v122", "v123", "v124", "v125", "v126", "v127", \
"v128", "v129", "v130", "v131", "v132", "v133", "v134", "v135", "v136", \
"v137", "v138", "v139", "v140", "v141", "v142", "v143", "v144", "v145", \
"v146", "v147", "v148", "v149", "v150", "v151", "v152", "v153", "v154", \
"v155", "v156", "v157", "v158", "v159", "v160", "v161", "v162", "v163", \
"v164", "v165", "v166", "v167", "v168", "v169", "v170", "v171", "v172", \
"v173", "v174", "v175", "v176", "v177", "v178", "v179", "v180", "v181", \
"v182", "v183", "v184", "v185", "v186", "v187", "v188", "v189", "v190", \
"v191", "v192", "v193", "v194", "v195", "v196", "v197", "v198", "v199", \
"v200", "v201", "v202", "v203", "v204", "v205", "v206", "v207", "v208", \
"v209", "v210", "v211", "v212", "v213", "v214", "v215", "v216", "v217", \
"v218", "v219", "v220", "v221", "v222", "v223", "v224", "v225", "v226", \
"v227", "v228", "v229", "v230", "v231", "v232", "v233", "v234", "v235", \
"v236", "v237", "v238", "v239", "v240", "v241", "v242", "v243", "v244", \
"v245", "v246", "v247", "v248", "v249", "v250", "v251", "v252", "v253", \
"v254", "v255", \
"?ap0", "?ap1", "?fp0", "?fp1" }
#define PRINT_OPERAND(FILE, X, CODE) print_operand(FILE, X, CODE)
#define PRINT_OPERAND_ADDRESS(FILE, ADDR) print_operand_address (FILE, ADDR)
#define PRINT_OPERAND_PUNCT_VALID_P(CODE) (CODE == '^')
/* Register Arguments */
#ifndef USED_FOR_TARGET
#define GCN_KERNEL_ARG_TYPES 19
struct GTY(()) gcn_kernel_args
{
long requested;
int reg[GCN_KERNEL_ARG_TYPES];
int order[GCN_KERNEL_ARG_TYPES];
int nargs, nsgprs;
};
typedef struct gcn_args
{
/* True if this isn't a kernel (HSA runtime entrypoint). */
bool normal_function;
tree fntype;
struct gcn_kernel_args args;
int num;
int offset;
int alignment;
} CUMULATIVE_ARGS;
#endif
#define INIT_CUMULATIVE_ARGS(CUM,FNTYPE,LIBNAME,FNDECL,N_NAMED_ARGS) \
gcn_init_cumulative_args (&(CUM), (FNTYPE), (LIBNAME), (FNDECL), \
(N_NAMED_ARGS) != -1)
#ifndef USED_FOR_TARGET
#include "hash-table.h"
#include "hash-map.h"
#include "vec.h"
struct GTY(()) machine_function
{
struct gcn_kernel_args args;
int kernarg_segment_alignment;
int kernarg_segment_byte_size;
/* Frame layout info for normal functions. */
bool normal_function;
bool need_frame_pointer;
bool lr_needs_saving;
HOST_WIDE_INT outgoing_args_size;
HOST_WIDE_INT pretend_size;
HOST_WIDE_INT local_vars;
HOST_WIDE_INT callee_saves;
unsigned lds_allocated;
hash_map<tree, int> *lds_allocs;
vec<tree, va_gc> *reduc_decls;
bool use_flat_addressing;
};
#endif
/* Codes for all the GCN builtins. */
enum gcn_builtin_codes
{
#define DEF_BUILTIN(fcode, icode, name, type, params, expander) \
GCN_BUILTIN_ ## fcode,
#define DEF_BUILTIN_BINOP_INT_FP(fcode, ic, name) \
GCN_BUILTIN_ ## fcode ## _V64SI, \
GCN_BUILTIN_ ## fcode ## _V64SI_unspec,
#include "gcn-builtins.def"
#undef DEF_BUILTIN
#undef DEF_BUILTIN_BINOP_INT_FP
GCN_BUILTIN_MAX
};
/* Misc */
/* We can load/store 128-bit quantities, but having this larger than
MAX_FIXED_MODE_SIZE (which we want to be 64 bits) causes problems. */
#define MOVE_MAX 8
#define AVOID_CCMODE_COPIES 1
#define SLOW_BYTE_ACCESS 0
#define WORD_REGISTER_OPERATIONS 1
/* Definitions for register eliminations.
This is an array of structures. Each structure initializes one pair
of eliminable registers. The "from" register number is given first,
followed by "to". Eliminations of the same "from" register are listed
in order of preference. */
#define ELIMINABLE_REGS \
{{ ARG_POINTER_REGNUM, STACK_POINTER_REGNUM }, \
{ ARG_POINTER_REGNUM, HARD_FRAME_POINTER_REGNUM }, \
{ FRAME_POINTER_REGNUM, STACK_POINTER_REGNUM }, \
{ FRAME_POINTER_REGNUM, HARD_FRAME_POINTER_REGNUM }}
/* Define the offset between two registers, one to be eliminated, and the
other its replacement, at the start of a routine. */
#define INITIAL_ELIMINATION_OFFSET(FROM, TO, OFFSET) \
((OFFSET) = gcn_initial_elimination_offset ((FROM), (TO)))
/* Define this macro if it is advisable to hold scalars in registers
in a wider mode than that declared by the program. In such cases,
the value is constrained to be within the bounds of the declared
type, but kept valid in the wider mode. The signedness of the
extension may differ from that of the type. */
#define PROMOTE_MODE(MODE,UNSIGNEDP,TYPE) \
if (GET_MODE_CLASS (MODE) == MODE_INT \
&& (TYPE == NULL || TREE_CODE (TYPE) != VECTOR_TYPE) \
&& GET_MODE_SIZE (MODE) < UNITS_PER_WORD) \
{ \
(MODE) = SImode; \
}
/* This needs to match gcn_function_value. */
#define LIBCALL_VALUE(MODE) gen_rtx_REG (MODE, SGPR_REGNO (RETURN_VALUE_REG))
/* Costs. */
/* Branches are to be discouraged when there's an alternative.
FIXME: This number is plucked from the air. */
#define BRANCH_COST(SPEED_P, PREDICABLE_P) 10
/* Profiling */
#define FUNCTION_PROFILER(FILE, LABELNO)
#define NO_PROFILE_COUNTERS 1
#define PROFILE_BEFORE_PROLOGUE 0
/* Trampolines */
#define TRAMPOLINE_SIZE 36
#define TRAMPOLINE_ALIGNMENT 64
; Options for the GCN port of the compiler.
; Copyright (C) 2016-2019 Free Software Foundation, Inc.
;
; This file is part of GCC.
;
; GCC is free software; you can redistribute it and/or modify it under
; the terms of the GNU General Public License as published by the Free
; Software Foundation; either version 3, or (at your option) any later
; version.
;
; GCC is distributed in the hope that it will be useful, but WITHOUT ANY
; WARRANTY; without even the implied warranty of MERCHANTABILITY or
; FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
; for more details.
;
; You should have received a copy of the GNU General Public License
; along with GCC; see the file COPYING3. If not see
; <http://www.gnu.org/licenses/>.
HeaderInclude
config/gcn/gcn-opts.h
Enum
Name(gpu_type) Type(enum processor_type)
GCN GPU type to use:
EnumValue
Enum(gpu_type) String(carrizo) Value(PROCESSOR_CARRIZO)
EnumValue
Enum(gpu_type) String(fiji) Value(PROCESSOR_FIJI)
EnumValue
Enum(gpu_type) String(gfx900) Value(PROCESSOR_VEGA)
march=
Target RejectNegative Joined ToLower Enum(gpu_type) Var(gcn_arch) Init(PROCESSOR_CARRIZO)
Specify the name of the target GPU.
mtune=
Target RejectNegative Joined ToLower Enum(gpu_type) Var(gcn_tune) Init(PROCESSOR_CARRIZO)
Specify the name of the target GPU.
m32
Target Report RejectNegative InverseMask(ABI64)
Generate code for a 32-bit ABI.
m64
Target Report RejectNegative Mask(ABI64)
Generate code for a 64-bit ABI.
mgomp
Target Report RejectNegative
Enable OpenMP GPU offloading.
bool flag_bypass_init_error = false
mbypass-init-error
Target Report RejectNegative Var(flag_bypass_init_error)
bool flag_worker_partitioning = false
macc-experimental-workers
Target Report Var(flag_worker_partitioning) Init(1)
int stack_size_opt = -1
mstack-size=
Target Report RejectNegative Joined UInteger Var(stack_size_opt) Init(-1)
-mstack-size=<number> Set the private segment size per wave-front, in bytes.
mlocal-symbol-id=
Target RejectNegative Report JoinedOrMissing Var(local_symbol_id) Init(0)
Wopenacc-dims
Target Var(warn_openacc_dims) Warning
Warn about invalid OpenACC dimensions.
# Copyright (C) 2016-2019 Free Software Foundation, Inc.
#
# This file is free software; you can redistribute it and/or modify it under
# the terms of the GNU General Public License as published by the Free
# Software Foundation; either version 3 of the License, or (at your option)
# any later version.
#
# This file is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
# for more details.
#
# You should have received a copy of the GNU General Public License
# along with GCC; see the file COPYING3. If not see
# <http://www.gnu.org/licenses/>.
GTM_H += $(HASH_TABLE_H)
driver-gcn.o: $(srcdir)/config/gcn/driver-gcn.c
$(COMPILE) $<
$(POSTCOMPILE)
CFLAGS-mkoffload.o += $(DRIVER_DEFINES) \
-DGCC_INSTALL_NAME=\"$(GCC_INSTALL_NAME)\"
mkoffload.o: $(srcdir)/config/gcn/mkoffload.c
$(COMPILE) $<
$(POSTCOMPILE)
ALL_HOST_OBJS += mkoffload.o
mkoffload$(exeext): mkoffload.o collect-utils.o libcommon-target.a \
$(LIBIBERTY) $(LIBDEPS)
+$(LINKER) $(ALL_LINKERFLAGS) $(LDFLAGS) -o $@ \
mkoffload.o collect-utils.o libcommon-target.a $(LIBIBERTY) $(LIBS)
CFLAGS-gcn-run.o += -DVERSION_STRING=$(PKGVERSION_s)
COMPILE-gcn-run.o = $(filter-out -fno-rtti,$(COMPILE))
gcn-run.o: $(srcdir)/config/gcn/gcn-run.c
$(COMPILE-gcn-run.o) -x c -std=gnu11 -Wno-error=pedantic $<
$(POSTCOMPILE)
ALL_HOST_OBJS += gcn-run.o
gcn-run$(exeext): gcn-run.o
+$(LINKER) $(ALL_LINKERFLAGS) $(LDFLAGS) -o $@ $< -ldl
MULTILIB_OPTIONS = march=gfx900
MULTILIB_DIRNAMES = gcn5
PASSES_EXTRA += $(srcdir)/config/gcn/gcn-passes.def
gcn-tree.o: $(srcdir)/config/gcn/gcn-tree.c
$(COMPILE) $<
$(POSTCOMPILE)
ALL_HOST_OBJS += gcn-tree.o