LCOV - code coverage report
Current view: top level - gcc - ipa-inline.c (source / functions)
Test: gcc.info          Date: 2020-04-04 11:58:09
Coverage:  Lines: 1281 / 1374 (93.2 %)  |  Functions: 47 / 50 (94.0 %)  |  Branches: 0 / 0 (-)
Legend: Lines: hit / not hit | Branches: + taken, - not taken, # not executed

           Branch data     Line data    Source code
       1                 :            : /* Inlining decision heuristics.
       2                 :            :    Copyright (C) 2003-2020 Free Software Foundation, Inc.
       3                 :            :    Contributed by Jan Hubicka
       4                 :            : 
       5                 :            : This file is part of GCC.
       6                 :            : 
       7                 :            : GCC is free software; you can redistribute it and/or modify it under
       8                 :            : the terms of the GNU General Public License as published by the Free
       9                 :            : Software Foundation; either version 3, or (at your option) any later
      10                 :            : version.
      11                 :            : 
      12                 :            : GCC is distributed in the hope that it will be useful, but WITHOUT ANY
      13                 :            : WARRANTY; without even the implied warranty of MERCHANTABILITY or
      14                 :            : FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
      15                 :            : for more details.
      16                 :            : 
      17                 :            : You should have received a copy of the GNU General Public License
      18                 :            : along with GCC; see the file COPYING3.  If not see
      19                 :            : <http://www.gnu.org/licenses/>.  */
      20                 :            : 
      21                 :            : /*  Inlining decision heuristics
      22                 :            : 
       23                 :            :     The implementation of the inliner is organized as follows:
      24                 :            : 
      25                 :            :     inlining heuristics limits
      26                 :            : 
       27                 :            :       can_inline_edge_p allows checking that a particular inlining is
       28                 :            :       permitted by the limits specified by the user (allowed function
       29                 :            :       growth, stack usage growth and so on).
      30                 :            : 
       31                 :            :       Functions are inlined when it is obvious that the result is profitable
       32                 :            :       (such as functions called once or when inlining reduces code size).
       33                 :            :       In addition, we perform inlining of small functions and recursive
       34                 :            :       inlining.
      35                 :            : 
      36                 :            :     inlining heuristics
      37                 :            : 
      38                 :            :        The inliner itself is split into two passes:
      39                 :            : 
      40                 :            :        pass_early_inlining
      41                 :            : 
       42                 :            :          A simple local inlining pass that inlines callees into the current
       43                 :            :          function.  This pass makes no use of whole-unit analysis and thus
       44                 :            :          can make only very simple decisions based on local properties.
      45                 :            : 
       46                 :            :          The strength of the pass is that it is run in topological order
       47                 :            :          (reverse postorder) on the callgraph.  Functions are converted into
       48                 :            :          SSA form just before this pass and optimized subsequently.  As a
       49                 :            :          result, the callees of a function seen by the early inliner have
       50                 :            :          already been optimized, and the results of early inlining add many
       51                 :            :          optimization opportunities for local optimization.
      52                 :            : 
       53                 :            :          The pass handles the obvious inlining decisions within the
       54                 :            :          compilation unit - inlining auto inline functions, inlining for
       55                 :            :          size and flattening.
      56                 :            : 
       57                 :            :          The main strength of the pass is its ability to eliminate the
       58                 :            :          abstraction penalty in C++ code (via a combination of inlining and
       59                 :            :          early optimization) and thus improve the quality of analysis done
       60                 :            :          by the real IPA optimizers.
      61                 :            : 
       62                 :            :          Because of the lack of whole-unit knowledge, the pass cannot really
       63                 :            :          make good code size/performance tradeoffs.  It does, however,
       64                 :            :          perform very simple speculative inlining, allowing code size to
       65                 :            :          grow by EARLY_INLINING_INSNS when the callee is a leaf function.
       66                 :            :          In this case the optimizations performed later are very likely to
       67                 :            :          eliminate the cost.
      67                 :            : 
      68                 :            :        pass_ipa_inline
      69                 :            : 
       70                 :            :          This is the real inliner, able to handle inlining with
       71                 :            :          whole-program knowledge.  It performs the following steps:
      72                 :            : 
       73                 :            :          1) Inlining of small functions.  This is implemented by a greedy
       74                 :            :          algorithm that orders all inlinable cgraph edges by their badness
       75                 :            :          and inlines them in this order as long as the inline limits allow
       76                 :            :          doing so.
       77                 :            : 
       78                 :            :          This heuristic is not very good at inlining recursive calls.
       79                 :            :          Recursive calls can be inlined with results similar to loop
       80                 :            :          unrolling.  To do so, a special-purpose recursive inliner is run
       81                 :            :          on a function when a recursive edge is met as a viable candidate.
      81                 :            : 
       82                 :            :          2) Unreachable functions are removed from the callgraph.  Inlining
       83                 :            :          leads to devirtualization and other modifications of the
       84                 :            :          callgraph, so functions may become unreachable during the process.
       85                 :            :          Functions declared extern inline and virtual functions are also
       86                 :            :          removed, since after inlining we no longer need their offline
       87                 :            :          bodies.
      87                 :            : 
       88                 :            :          3) Functions called once and not exported from the unit are
       89                 :            :          inlined.  This should almost always lead to a reduction of code
       90                 :            :          size by eliminating the need for an offline copy of the function.  */
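The greedy step 1 of pass_ipa_inline described above can be sketched as follows. This is a hypothetical, heavily simplified model: the real inliner orders `cgraph_edge`s in an `sreal`-keyed `fibonacci_heap` (see `edge_heap_t` below) and recomputes badness as inlining proceeds; here a plain `std::priority_queue` over made-up `edge` records stands in for both.

```cpp
#include <queue>
#include <vector>

// Hypothetical stand-in for an inlinable call edge; not a GCC type.
struct edge
{
  int badness;   // lower = more profitable to inline
  int growth;    // code-size growth caused by inlining this edge
};

// Sketch of the greedy loop: pop edges in badness order and "inline"
// them while the overall size budget allows it.  Returns the number
// of edges inlined under the budget.
static int
greedy_inline (std::vector<edge> edges, int size_budget)
{
  auto cmp = [] (const edge &a, const edge &b)
    { return a.badness > b.badness; };            // min-heap on badness
  std::priority_queue<edge, std::vector<edge>, decltype (cmp)> heap (cmp);
  for (const edge &e : edges)
    heap.push (e);

  int inlined = 0;
  while (!heap.empty ())
    {
      edge e = heap.top ();
      heap.pop ();
      if (e.growth > size_budget)
        continue;            // inline limits do not allow this edge
      size_budget -= e.growth;
      inlined++;             // real code would perform the inlining here
    }
  return inlined;
}
```

Unlike this sketch, the real pass updates edge badness and the size budget dynamically, which is why a decrease-key-capable fibonacci heap is used rather than a plain priority queue.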
      91                 :            : 
      92                 :            : #include "config.h"
      93                 :            : #include "system.h"
      94                 :            : #include "coretypes.h"
      95                 :            : #include "backend.h"
      96                 :            : #include "target.h"
      97                 :            : #include "rtl.h"
      98                 :            : #include "tree.h"
      99                 :            : #include "gimple.h"
     100                 :            : #include "alloc-pool.h"
     101                 :            : #include "tree-pass.h"
     102                 :            : #include "gimple-ssa.h"
     103                 :            : #include "cgraph.h"
     104                 :            : #include "lto-streamer.h"
     105                 :            : #include "trans-mem.h"
     106                 :            : #include "calls.h"
     107                 :            : #include "tree-inline.h"
     108                 :            : #include "profile.h"
     109                 :            : #include "symbol-summary.h"
     110                 :            : #include "tree-vrp.h"
     111                 :            : #include "ipa-prop.h"
     112                 :            : #include "ipa-fnsummary.h"
     113                 :            : #include "ipa-inline.h"
     114                 :            : #include "ipa-utils.h"
     115                 :            : #include "sreal.h"
     116                 :            : #include "auto-profile.h"
     117                 :            : #include "builtins.h"
     118                 :            : #include "fibonacci_heap.h"
     119                 :            : #include "stringpool.h"
     120                 :            : #include "attribs.h"
     121                 :            : #include "asan.h"
     122                 :            : 
     123                 :            : typedef fibonacci_heap <sreal, cgraph_edge> edge_heap_t;
     124                 :            : typedef fibonacci_node <sreal, cgraph_edge> edge_heap_node_t;
     125                 :            : 
     126                 :            : /* Statistics we collect about inlining algorithm.  */
     127                 :            : static int overall_size;
     128                 :            : static profile_count max_count;
     129                 :            : static profile_count spec_rem;
     130                 :            : 
      131                 :            : /* Return false when inlining edge E would lead to violating
      132                 :            :    limits on function unit growth or stack usage growth.
      133                 :            : 
      134                 :            :    The relative function body growth limit is present mainly
      135                 :            :    to avoid problems with non-linear behavior of the compiler.
      136                 :            :    To allow inlining huge functions into a tiny wrapper, the limit
      137                 :            :    is always based on the bigger of the two functions considered.
      138                 :            : 
      139                 :            :    For stack growth limits we always base the growth on the stack
      140                 :            :    usage of the callers.  We want to prevent applications from
      141                 :            :    segfaulting on stack overflow when functions with huge stack
      142                 :            :    frames get inlined.  */
     143                 :            : 
     144                 :            : static bool
     145                 :    3664930 : caller_growth_limits (struct cgraph_edge *e)
     146                 :            : {
     147                 :    3664930 :   struct cgraph_node *to = e->caller;
     148                 :    3664930 :   struct cgraph_node *what = e->callee->ultimate_alias_target ();
     149                 :    3664930 :   int newsize;
     150                 :    3664930 :   int limit = 0;
     151                 :    3664930 :   HOST_WIDE_INT stack_size_limit = 0, inlined_stack;
     152                 :    3664930 :   ipa_size_summary *outer_info = ipa_size_summaries->get (to);
     153                 :            : 
      154                 :            :   /* Look for the function e->caller is inlined into.  While
      155                 :            :      doing so, work out the largest function body on the way.
      156                 :            :      As described above, we want to base our function growth
      157                 :            :      limits on that - not on the self size of the outer
      158                 :            :      function, nor on the self size of the inline code we
      159                 :            :      immediately inline into.  This is the most relaxed
      160                 :            :      interpretation of the rule "do not grow large functions
      161                 :            :      too much in order to prevent the compiler from exploding".  */
     162                 :    4983380 :   while (true)
     163                 :            :     {
     164                 :    4324160 :       ipa_size_summary *size_info = ipa_size_summaries->get (to);
     165                 :    4324160 :       if (limit < size_info->self_size)
     166                 :            :         limit = size_info->self_size;
     167                 :    4324160 :       if (stack_size_limit < size_info->estimated_self_stack_size)
     168                 :            :         stack_size_limit = size_info->estimated_self_stack_size;
     169                 :    4324160 :       if (to->inlined_to)
     170                 :     659226 :         to = to->callers->caller;
     171                 :            :       else
     172                 :            :         break;
     173                 :     659226 :     }
     174                 :            : 
     175                 :    3664930 :   ipa_fn_summary *what_info = ipa_fn_summaries->get (what);
     176                 :    3664930 :   ipa_size_summary *what_size_info = ipa_size_summaries->get (what);
     177                 :            : 
     178                 :    3664930 :   if (limit < what_size_info->self_size)
     179                 :            :     limit = what_size_info->self_size;
     180                 :            : 
     181                 :    3664930 :   limit += limit * opt_for_fn (to->decl, param_large_function_growth) / 100;
     182                 :            : 
     183                 :            :   /* Check the size after inlining against the function limits.  But allow
     184                 :            :      the function to shrink if it went over the limits by forced inlining.  */
     185                 :    3664930 :   newsize = estimate_size_after_inlining (to, e);
     186                 :    3664930 :   if (newsize >= ipa_size_summaries->get (what)->size
     187                 :    3579210 :       && newsize > opt_for_fn (to->decl, param_large_function_insns)
     188                 :    3685080 :       && newsize > limit)
     189                 :            :     {
     190                 :       1490 :       e->inline_failed = CIF_LARGE_FUNCTION_GROWTH_LIMIT;
     191                 :       1490 :       return false;
     192                 :            :     }
     193                 :            : 
     194                 :    3663440 :   if (!what_info->estimated_stack_size)
     195                 :            :     return true;
     196                 :            : 
      197                 :            :   /* FIXME: The stack size limit often prevents inlining in Fortran
      198                 :            :      programs due to large I/O data structures used by the Fortran
      199                 :            :      front-end.  We ought to ignore this limit when we know that the
      200                 :            :      edge is executed on every invocation of the caller (i.e. its
      201                 :            :      call statement dominates the exit block).  We do not track this
      202                 :            :      information, yet.  */
     202                 :    1048140 :   stack_size_limit += ((gcov_type)stack_size_limit
     203                 :     524071 :                        * opt_for_fn (to->decl, param_stack_frame_growth)
     204                 :     524071 :                        / 100);
     205                 :            : 
     206                 :     524071 :   inlined_stack = (ipa_get_stack_frame_offset (to)
     207                 :     524071 :                    + outer_info->estimated_self_stack_size
     208                 :     524071 :                    + what_info->estimated_stack_size);
     209                 :            :   /* Check new stack consumption with stack consumption at the place
     210                 :            :      stack is used.  */
     211                 :     524071 :   if (inlined_stack > stack_size_limit
      212                 :            :       /* If the function already has large stack usage from a sibling
      213                 :            :          inline call, we can inline, too.
      214                 :            :          This bit overoptimistically assumes that we are good at stack
      215                 :            :          packing.  */
     216                 :     319870 :       && inlined_stack > ipa_fn_summaries->get (to)->estimated_stack_size
     217                 :     679325 :       && inlined_stack > opt_for_fn (to->decl, param_large_stack_frame))
     218                 :            :     {
     219                 :      38740 :       e->inline_failed = CIF_LARGE_STACK_FRAME_GROWTH_LIMIT;
     220                 :      38740 :       return false;
     221                 :            :     }
     222                 :            :   return true;
     223                 :            : }
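The relative growth limit computed in caller_growth_limits above can be modeled in isolation. This is a hypothetical sketch with the `opt_for_fn`/summary lookups replaced by plain integers: the limit starts from the bigger of the two self sizes and is scaled up by the large-function-growth percentage (the real code additionally walks the chain of functions the caller was already inlined into, taking the largest body seen).

```cpp
#include <algorithm>

// Simplified model of the size limit in caller_growth_limits:
// base the limit on the bigger of caller and callee self sizes,
// then allow it to grow by the given percentage.
static int
growth_limit (int caller_self_size, int callee_self_size,
              int large_function_growth_percent)
{
  int limit = std::max (caller_self_size, callee_self_size);
  limit += limit * large_function_growth_percent / 100;
  return limit;
}
```

Inlining is then rejected only when the estimated post-inlining size exceeds both this limit and the absolute large-function-insns threshold, mirroring the three-way condition in the real code.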
     224                 :            : 
     225                 :            : /* Dump info about why inlining has failed.  */
     226                 :            : 
     227                 :            : static void
     228                 :    3221830 : report_inline_failed_reason (struct cgraph_edge *e)
     229                 :            : {
     230                 :    3221830 :   if (dump_enabled_p ())
     231                 :            :     {
     232                 :       2429 :       dump_printf_loc (MSG_MISSED_OPTIMIZATION, e->call_stmt,
     233                 :            :                        "  not inlinable: %C -> %C, %s\n",
     234                 :            :                        e->caller, e->callee,
     235                 :            :                        cgraph_inline_failed_string (e->inline_failed));
     236                 :       2429 :       if ((e->inline_failed == CIF_TARGET_OPTION_MISMATCH
     237                 :       2429 :            || e->inline_failed == CIF_OPTIMIZATION_MISMATCH)
     238                 :          2 :           && e->caller->lto_file_data
     239                 :       2429 :           && e->callee->ultimate_alias_target ()->lto_file_data)
     240                 :            :         {
     241                 :          0 :           dump_printf_loc (MSG_MISSED_OPTIMIZATION, e->call_stmt,
     242                 :            :                            "  LTO objects: %s, %s\n",
     243                 :          0 :                            e->caller->lto_file_data->file_name,
     244                 :          0 :                            e->callee->ultimate_alias_target ()->lto_file_data->file_name);
     245                 :            :         }
     246                 :       2429 :       if (e->inline_failed == CIF_TARGET_OPTION_MISMATCH)
     247                 :          2 :         if (dump_file)
     248                 :          0 :           cl_target_option_print_diff
     249                 :          0 :             (dump_file, 2, target_opts_for_fn (e->caller->decl),
     250                 :          0 :              target_opts_for_fn (e->callee->ultimate_alias_target ()->decl));
     251                 :       2429 :       if (e->inline_failed == CIF_OPTIMIZATION_MISMATCH)
     252                 :          0 :         if (dump_file)
     253                 :          0 :           cl_optimization_print_diff
     254                 :          0 :             (dump_file, 2, opts_for_fn (e->caller->decl),
     255                 :          0 :              opts_for_fn (e->callee->ultimate_alias_target ()->decl));
     256                 :            :     }
     257                 :    3221830 : }
     258                 :            : 
     259                 :            :  /* Decide whether sanitizer-related attributes allow inlining. */
     260                 :            : 
     261                 :            : static bool
     262                 :    4802660 : sanitize_attrs_match_for_inline_p (const_tree caller, const_tree callee)
     263                 :            : {
     264                 :    4802660 :   if (!caller || !callee)
     265                 :            :     return true;
     266                 :            : 
     267                 :            :   /* Allow inlining always_inline functions into no_sanitize_address
     268                 :            :      functions.  */
     269                 :    4802660 :   if (!sanitize_flags_p (SANITIZE_ADDRESS, caller)
     270                 :    9597150 :       && lookup_attribute ("always_inline", DECL_ATTRIBUTES (callee)))
     271                 :            :     return true;
     272                 :            : 
     273                 :    4741750 :   return ((sanitize_flags_p (SANITIZE_ADDRESS, caller)
     274                 :    4741750 :            == sanitize_flags_p (SANITIZE_ADDRESS, callee))
     275                 :    9483410 :           && (sanitize_flags_p (SANITIZE_POINTER_COMPARE, caller)
     276                 :    4741700 :               == sanitize_flags_p (SANITIZE_POINTER_COMPARE, callee))
     277                 :   14225200 :           && (sanitize_flags_p (SANITIZE_POINTER_SUBTRACT, caller)
     278                 :    4741700 :               == sanitize_flags_p (SANITIZE_POINTER_SUBTRACT, callee)));
     279                 :            : }
     280                 :            : 
      281                 :            : /* Used for flags where it is safe to inline when caller's value is
      282                 :            :    greater than callee's.  */
     283                 :            : #define check_maybe_up(flag) \
     284                 :            :       (opts_for_fn (caller->decl)->x_##flag               \
     285                 :            :        != opts_for_fn (callee->decl)->x_##flag            \
     286                 :            :        && (!always_inline                               \
     287                 :            :            || opts_for_fn (caller->decl)->x_##flag        \
     288                 :            :               < opts_for_fn (callee->decl)->x_##flag))
     289                 :            : /* Used for flags where it is safe to inline when caller's value is
     290                 :            :    smaller than callee's.  */
     291                 :            : #define check_maybe_down(flag) \
     292                 :            :       (opts_for_fn (caller->decl)->x_##flag               \
     293                 :            :        != opts_for_fn (callee->decl)->x_##flag            \
     294                 :            :        && (!always_inline                               \
     295                 :            :            || opts_for_fn (caller->decl)->x_##flag        \
     296                 :            :               > opts_for_fn (callee->decl)->x_##flag))
     297                 :            : /* Used for flags where exact match is needed for correctness.  */
     298                 :            : #define check_match(flag) \
     299                 :            :       (opts_for_fn (caller->decl)->x_##flag               \
     300                 :            :        != opts_for_fn (callee->decl)->x_##flag)
     301                 :            : 
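The check_maybe_up family of macros above can be restated as a standalone predicate. This is a hypothetical model (`mismatch_maybe_up` is not a GCC function) with the `opts_for_fn (...)->x_##flag` lookups replaced by plain integers; it returns true when a flag mismatch should block inlining.

```cpp
// Model of check_maybe_up: a differing flag blocks inlining, except
// that an always_inline edge tolerates the caller's value being
// greater than the callee's.
static bool
mismatch_maybe_up (int caller_flag, int callee_flag, bool always_inline)
{
  return caller_flag != callee_flag
         && (!always_inline || caller_flag < callee_flag);
}
```

check_maybe_down is the same predicate with the comparison reversed, and check_match drops the always_inline escape hatch entirely, requiring an exact match.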
      302                 :            : /* Decide if we can inline the edge and possibly update
      303                 :            :    the inline_failed reason.
      304                 :            :    We check whether inlining is possible at all and whether
      305                 :            :    caller growth limits allow doing so.
      306                 :            : 
      307                 :            :    If REPORT is true, output the reason to the dump file.  */
     308                 :            : 
     309                 :            : static bool
     310                 :    7610700 : can_inline_edge_p (struct cgraph_edge *e, bool report,
     311                 :            :                    bool early = false)
     312                 :            : {
     313                 :    7610700 :   gcc_checking_assert (e->inline_failed);
     314                 :            : 
     315                 :    7610700 :   if (cgraph_inline_failed_type (e->inline_failed) == CIF_FINAL_ERROR)
     316                 :            :     {
     317                 :    2449630 :       if (report)
     318                 :    2400650 :         report_inline_failed_reason (e);
     319                 :    2449630 :       return false;
     320                 :            :     }
     321                 :            : 
     322                 :    5161070 :   bool inlinable = true;
     323                 :    5161070 :   enum availability avail;
     324                 :   10322100 :   cgraph_node *caller = (e->caller->inlined_to
     325                 :    5161070 :                          ? e->caller->inlined_to : e->caller);
     326                 :    5161070 :   cgraph_node *callee = e->callee->ultimate_alias_target (&avail, caller);
     327                 :            : 
     328                 :    5161070 :   if (!callee->definition)
     329                 :            :     {
     330                 :        359 :       e->inline_failed = CIF_BODY_NOT_AVAILABLE;
     331                 :        359 :       inlinable = false;
     332                 :            :     }
     333                 :    5161070 :   if (!early && (!opt_for_fn (callee->decl, optimize)
     334                 :    2713140 :                  || !opt_for_fn (caller->decl, optimize)))
     335                 :            :     {
     336                 :         55 :       e->inline_failed = CIF_FUNCTION_NOT_OPTIMIZED;
     337                 :         55 :       inlinable = false;
     338                 :            :     }
     339                 :    5161020 :   else if (callee->calls_comdat_local)
     340                 :            :     {
     341                 :      11286 :       e->inline_failed = CIF_USES_COMDAT_LOCAL;
     342                 :      11286 :       inlinable = false;
     343                 :            :     }
     344                 :    5149730 :   else if (avail <= AVAIL_INTERPOSABLE)
     345                 :            :     {
     346                 :     100977 :       e->inline_failed = CIF_OVERWRITABLE;
     347                 :     100977 :       inlinable = false;
     348                 :            :     }
     349                 :            :   /* All edges with call_stmt_cannot_inline_p should have inline_failed
     350                 :            :      initialized to one of FINAL_ERROR reasons.  */
     351                 :    5048760 :   else if (e->call_stmt_cannot_inline_p)
     352                 :          0 :     gcc_unreachable ();
     353                 :            :   /* Don't inline if the functions have different EH personalities.  */
     354                 :    5048760 :   else if (DECL_FUNCTION_PERSONALITY (caller->decl)
     355                 :    1623420 :            && DECL_FUNCTION_PERSONALITY (callee->decl)
     356                 :    5139400 :            && (DECL_FUNCTION_PERSONALITY (caller->decl)
     357                 :      90643 :                != DECL_FUNCTION_PERSONALITY (callee->decl)))
     358                 :            :     {
     359                 :          0 :       e->inline_failed = CIF_EH_PERSONALITY;
     360                 :          0 :       inlinable = false;
     361                 :            :     }
     362                 :            :   /* TM pure functions should not be inlined into non-TM_pure
     363                 :            :      functions.  */
     364                 :    5048760 :   else if (is_tm_pure (callee->decl) && !is_tm_pure (caller->decl))
     365                 :            :     {
     366                 :         25 :       e->inline_failed = CIF_UNSPECIFIED;
     367                 :         25 :       inlinable = false;
     368                 :            :     }
     369                 :            :   /* Check compatibility of target optimization options.  */
     370                 :    5048730 :   else if (!targetm.target_option.can_inline_p (caller->decl,
     371                 :            :                                                 callee->decl))
     372                 :            :     {
     373                 :        309 :       e->inline_failed = CIF_TARGET_OPTION_MISMATCH;
     374                 :        309 :       inlinable = false;
     375                 :            :     }
     376                 :    5048420 :   else if (ipa_fn_summaries->get (callee) == NULL
     377                 :    5048420 :            || !ipa_fn_summaries->get (callee)->inlinable)
     378                 :            :     {
     379                 :     245765 :       e->inline_failed = CIF_FUNCTION_NOT_INLINABLE;
     380                 :     245765 :       inlinable = false;
     381                 :            :     }
     382                 :            :   /* Don't inline a function with mismatched sanitization attributes. */
     383                 :    4802660 :   else if (!sanitize_attrs_match_for_inline_p (caller->decl, callee->decl))
     384                 :            :     {
     385                 :         49 :       e->inline_failed = CIF_ATTRIBUTE_MISMATCH;
     386                 :         49 :       inlinable = false;
     387                 :            :     }
     388                 :    5161070 :   if (!inlinable && report)
     389                 :     358457 :     report_inline_failed_reason (e);
     390                 :            :   return inlinable;
     391                 :            : }
     392                 :            : 
      393                 :            : /* Return the inline_insns_single limit for function N.  If HINT is
      394                 :            :    true, scale up the bound.  */
     395                 :            : 
     396                 :            : static int
     397                 :    2934080 : inline_insns_single (cgraph_node *n, bool hint)
     398                 :            : {
     399                 :    2934080 :   if (hint)
     400                 :    1208820 :     return opt_for_fn (n->decl, param_max_inline_insns_single)
     401                 :    1208820 :            * opt_for_fn (n->decl, param_inline_heuristics_hint_percent) / 100;
     402                 :    1725260 :   return opt_for_fn (n->decl, param_max_inline_insns_single);
     403                 :            : }
     404                 :            : 
      405                 :            : /* Return the inline_insns_auto limit for function N.  If HINT is
      406                 :            :    true, scale up the bound.  */
     407                 :            : 
     408                 :            : static int
     409                 :    2067860 : inline_insns_auto (cgraph_node *n, bool hint)
     410                 :            : {
     411                 :     213976 :   int max_inline_insns_auto = opt_for_fn (n->decl, param_max_inline_insns_auto);
     412                 :     510888 :   if (hint)
     413                 :    1352480 :     return max_inline_insns_auto
     414                 :       9548 :            * opt_for_fn (n->decl, param_inline_heuristics_hint_percent) / 100;
     415                 :            :   return max_inline_insns_auto;
     416                 :            : }
     417                 :            : 
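The two limit helpers above share one scaling rule: with a hint, the base parameter is multiplied by param_inline_heuristics_hint_percent / 100 in integer arithmetic. A minimal sketch of that rule; the base value and hint factor below are illustrative assumptions, not the actual parameter defaults:

```cpp
#include <cassert>

// Illustrative stand-ins for the --param values; the real defaults
// depend on the GCC version and optimization level.
static const int max_insns_auto = 15;   // assumed base limit
static const int hint_percent   = 200;  // assumed hint scale factor

// Mirrors inline_insns_auto above: scale the bound when HINT is set.
static int
inline_insns_auto_sketch (bool hint)
{
  if (hint)
    return max_insns_auto * hint_percent / 100;
  return max_insns_auto;
}
```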
     418                 :            : /* Decide if we can inline the edge and possibly update
     419                 :            :    inline_failed reason.  
     420                 :            :    We check whether inlining is possible at all and whether
     421                 :            :    caller growth limits allow doing so.  
     422                 :            : 
     423                 :            :    If REPORT is true, output the reason to the dump file.
     424                 :            : 
     425                 :            :    If DISREGARD_LIMITS is true, ignore size limits.  If EARLY is true, apply the relaxed checks used by the early inliner.  */
     426                 :            : 
     427                 :            : static bool
     428                 :    3731110 : can_inline_edge_by_limits_p (struct cgraph_edge *e, bool report,
     429                 :            :                              bool disregard_limits = false, bool early = false)
     430                 :            : {
     431                 :    3731110 :   gcc_checking_assert (e->inline_failed);
     432                 :            : 
     433                 :    3731110 :   if (cgraph_inline_failed_type (e->inline_failed) == CIF_FINAL_ERROR)
     434                 :            :     {
     435                 :        182 :       if (report)
     436                 :        182 :         report_inline_failed_reason (e);
     437                 :        182 :       return false;
     438                 :            :     }
     439                 :            : 
     440                 :    3730920 :   bool inlinable = true;
     441                 :    3730920 :   enum availability avail;
     442                 :    7461850 :   cgraph_node *caller = (e->caller->inlined_to
     443                 :    3730920 :                          ? e->caller->inlined_to : e->caller);
     444                 :    3730920 :   cgraph_node *callee = e->callee->ultimate_alias_target (&avail, caller);
     445                 :    3730920 :   tree caller_tree = DECL_FUNCTION_SPECIFIC_OPTIMIZATION (caller->decl);
     446                 :    3730920 :   tree callee_tree
     447                 :    3730920 :     = callee ? DECL_FUNCTION_SPECIFIC_OPTIMIZATION (callee->decl) : NULL;
     448                 :            :   /* Check if caller growth allows the inlining.  */
     449                 :    3730920 :   if (!DECL_DISREGARD_INLINE_LIMITS (callee->decl)
     450                 :    3668030 :       && !disregard_limits
     451                 :    3665070 :       && !lookup_attribute ("flatten",
     452                 :    3665070 :                  DECL_ATTRIBUTES (caller->decl))
     453                 :    7395860 :       && !caller_growth_limits (e))
     454                 :            :     inlinable = false;
     455                 :    3690700 :   else if (callee->externally_visible
     456                 :    2423380 :            && !DECL_DISREGARD_INLINE_LIMITS (callee->decl)
     457                 :    6081720 :            && flag_live_patching == LIVE_PATCHING_INLINE_ONLY_STATIC)
     458                 :            :     {
     459                 :          2 :       e->inline_failed = CIF_EXTERN_LIVE_ONLY_STATIC;
     460                 :          2 :       inlinable = false;
     461                 :            :     }
     462                 :            :   /* Don't inline a function with a higher optimization level than the
     463                 :            :      caller.  FIXME: this is really just the tip of the iceberg of handling
     464                 :            :      optimization attribute.  */
     465                 :    3690690 :   else if (caller_tree != callee_tree)
     466                 :            :     {
     467                 :       4175 :       bool always_inline =
     468                 :       4175 :              (DECL_DISREGARD_INLINE_LIMITS (callee->decl)
     469                 :         16 :               && lookup_attribute ("always_inline",
     470                 :       4175 :                                    DECL_ATTRIBUTES (callee->decl)));
     471                 :       4175 :       ipa_fn_summary *caller_info = ipa_fn_summaries->get (caller);
     472                 :       4175 :       ipa_fn_summary *callee_info = ipa_fn_summaries->get (callee);
     473                 :            : 
     474                 :            :      /* Until GCC 4.9 we did not check the semantics-altering flags
     475                 :            :         below and inlined across optimization boundaries.
     476                 :            :         Enabling checks below breaks several packages by refusing
     477                 :            :         to inline library always_inline functions. See PR65873.
     478                 :            :         Disable the check for early inlining for now until better solution
     479                 :            :         Disable the check for early inlining for now until a better solution
     480                 :       4175 :      if (always_inline && early)
     481                 :            :         ;
     482                 :            :       /* There are some options that change IL semantics which means
     483                 :            :          we cannot inline in these cases for correctness reasons.
     484                 :            :          Not even for always_inline declared functions.  */
     485                 :       4159 :      else if (check_match (flag_wrapv)
     486                 :       4159 :               || check_match (flag_trapv)
     487                 :       4159 :               || check_match (flag_pcc_struct_return)
     488                 :            :               /* When caller or callee does FP math, be sure the FP codegen
     489                 :            :                  flags are compatible.  */
     490                 :       4159 :               || ((caller_info->fp_expressions && callee_info->fp_expressions)
     491                 :          2 :                   && (check_maybe_up (flag_rounding_math)
     492                 :          2 :                       || check_maybe_up (flag_trapping_math)
     493                 :          0 :                       || check_maybe_down (flag_unsafe_math_optimizations)
     494                 :          0 :                       || check_maybe_down (flag_finite_math_only)
     495                 :          0 :                       || check_maybe_up (flag_signaling_nans)
     496                 :          0 :                       || check_maybe_down (flag_cx_limited_range)
     497                 :          0 :                       || check_maybe_up (flag_signed_zeros)
     498                 :          0 :                       || check_maybe_down (flag_associative_math)
     499                 :          0 :                       || check_maybe_down (flag_reciprocal_math)
     500                 :          0 :                       || check_maybe_down (flag_fp_int_builtin_inexact)
     501                 :            :                       /* Strictly speaking only when the callee contains function
     502                 :            :                          calls that may end up setting errno.  */
     503                 :          0 :                       || check_maybe_up (flag_errno_math)))
     504                 :            :               /* We do not want code compiled with exceptions to be
     505                 :            :                  brought into a non-EH function unless we know that the callee
     506                 :            :                  does not throw.
     507                 :            :                  This is tracked by DECL_FUNCTION_PERSONALITY.  */
     508                 :       4157 :               || (check_maybe_up (flag_non_call_exceptions)
     509                 :          0 :                   && DECL_FUNCTION_PERSONALITY (callee->decl))
     510                 :       4157 :               || (check_maybe_up (flag_exceptions)
     511                 :         16 :                   && DECL_FUNCTION_PERSONALITY (callee->decl))
     512                 :            :               /* When devirtualization is disabled for the callee, it is not safe
     513                 :            :                  to inline it as we possibly mangled the type info.
     514                 :            :                  Allow early inlining of always inlines.  */
     515                 :       8316 :               || (!early && check_maybe_down (flag_devirtualize)))
     516                 :            :         {
     517                 :          9 :           e->inline_failed = CIF_OPTIMIZATION_MISMATCH;
     518                 :          9 :           inlinable = false;
     519                 :            :         }
     520                 :            :       /* gcc.dg/pr43564.c.  Apply user-forced inline even at -O0.  */
     521                 :       4150 :       else if (always_inline)
     522                 :            :         ;
     523                 :            :       /* When user added an attribute to the callee honor it.  */
     524                 :       4150 :       else if (lookup_attribute ("optimize", DECL_ATTRIBUTES (callee->decl))
     525                 :       4150 :                && opts_for_fn (caller->decl) != opts_for_fn (callee->decl))
     526                 :            :         {
     527                 :         20 :           e->inline_failed = CIF_OPTIMIZATION_MISMATCH;
     528                 :         20 :           inlinable = false;
     529                 :            :         }
     530                 :            :       /* If an explicit optimize attribute is not used, the mismatch is caused
     531                 :            :          by different command line options used to build different units.
     532                 :            :          Do not care about COMDAT functions - those are intended to be
     533                 :            :          optimized with the optimization flags of module they are used in.
     534                 :            :          Also do not care about mixing up size/speed optimization when
     535                 :            :          DECL_DISREGARD_INLINE_LIMITS is set.  */
     536                 :       4130 :       else if ((callee->merged_comdat
     537                 :          0 :                 && !lookup_attribute ("optimize",
     538                 :          0 :                                       DECL_ATTRIBUTES (caller->decl)))
     539                 :       4130 :                || DECL_DISREGARD_INLINE_LIMITS (callee->decl))
     540                 :            :         ;
     541                 :            :       /* If mismatch is caused by merging two LTO units with different
     542                 :            :          optimization flags we want to be a bit nicer.  However never inline
     543                 :            :          if one of the functions is not optimized at all.  */
     544                 :       4130 :       else if (!opt_for_fn (callee->decl, optimize)
     545                 :       4130 :                || !opt_for_fn (caller->decl, optimize))
     546                 :            :         {
     547                 :          0 :           e->inline_failed = CIF_OPTIMIZATION_MISMATCH;
     548                 :          0 :           inlinable = false;
     549                 :            :         }
     550                 :            :       /* If callee is optimized for size and caller is not, allow inlining if
     551                 :            :          code shrinks or we are in param_max_inline_insns_single limit and
     552                 :            :          callee is inline (and thus likely a unified comdat).
     553                 :            :          This will allow caller to run faster.  */
     554                 :       4130 :       else if (opt_for_fn (callee->decl, optimize_size)
     555                 :       4130 :                > opt_for_fn (caller->decl, optimize_size))
     556                 :            :         {
     557                 :         97 :           int growth = estimate_edge_growth (e);
     558                 :         97 :           if (growth > opt_for_fn (caller->decl, param_max_inline_insns_size)
     559                 :         97 :               && (!DECL_DECLARED_INLINE_P (callee->decl)
     560                 :         57 :                   && growth >= MAX (inline_insns_single (caller, false),
     561                 :            :                                     inline_insns_auto (caller, false))))
     562                 :            :             {
     563                 :          0 :               e->inline_failed = CIF_OPTIMIZATION_MISMATCH;
     564                 :          0 :               inlinable = false;
     565                 :            :             }
     566                 :            :         }
     567                 :            :       /* If callee is more aggressively optimized for performance than caller,
     568                 :            :          we generally want to inline only cheap (runtime wise) functions.  */
     569                 :       4033 :       else if (opt_for_fn (callee->decl, optimize_size)
     570                 :            :                < opt_for_fn (caller->decl, optimize_size)
     571                 :       4033 :                || (opt_for_fn (callee->decl, optimize)
     572                 :            :                    > opt_for_fn (caller->decl, optimize)))
     573                 :            :         {
     574                 :      10779 :           if (estimate_edge_time (e)
     575                 :       3593 :               >= 20 + ipa_call_summaries->get (e)->call_stmt_time)
     576                 :            :             {
     577                 :       1419 :               e->inline_failed = CIF_OPTIMIZATION_MISMATCH;
     578                 :       1419 :               inlinable = false;
     579                 :            :             }
     580                 :            :         }
     581                 :            : 
     582                 :            :     }
     583                 :            : 
     584                 :    3730920 :   if (!inlinable && report)
     585                 :      39922 :     report_inline_failed_reason (e);
     586                 :            :   return inlinable;
     587                 :            : }
     588                 :            : 
     589                 :            : 
     590                 :            : /* Return true if the edge E is inlinable during early inlining.  */
     591                 :            : 
     592                 :            : static bool
     593                 :    2448080 : can_early_inline_edge_p (struct cgraph_edge *e)
     594                 :            : {
     595                 :    2448080 :   struct cgraph_node *callee = e->callee->ultimate_alias_target ();
     596                 :            :   /* The early inliner might get called at WPA stage when an IPA pass adds
     597                 :            :      a new function.  In this case we cannot really do any early inlining,
     598                 :            :      because function bodies are missing.  */
     599                 :    2448080 :   if (cgraph_inline_failed_type (e->inline_failed) == CIF_FINAL_ERROR)
     600                 :            :     return false;
     601                 :    2447880 :   if (!gimple_has_body_p (callee->decl))
     602                 :            :     {
     603                 :          0 :       e->inline_failed = CIF_BODY_NOT_AVAILABLE;
     604                 :          0 :       return false;
     605                 :            :     }
     606                 :            :   /* In the early inliner some callees may not be in SSA form yet
     607                 :            :      (i.e. the callgraph is cyclic and we did not process
     608                 :            :      the callee by the early inliner yet).  We don't have a CIF code for
     609                 :            :      this case; later we will re-do the decision in the real inliner.  */
     610                 :    2447880 :   if (!gimple_in_ssa_p (DECL_STRUCT_FUNCTION (e->caller->decl))
     611                 :    2447880 :       || !gimple_in_ssa_p (DECL_STRUCT_FUNCTION (callee->decl)))
     612                 :            :     {
     613                 :          0 :       if (dump_enabled_p ())
     614                 :          0 :         dump_printf_loc (MSG_MISSED_OPTIMIZATION, e->call_stmt,
     615                 :            :                          "  edge not inlinable: not in SSA form\n");
     616                 :          0 :       return false;
     617                 :            :     }
     618                 :    2447880 :   if (!can_inline_edge_p (e, true, true)
     619                 :    2447880 :       || !can_inline_edge_by_limits_p (e, true, false, true))
     620                 :      69248 :     return false;
     621                 :            :   return true;
     622                 :            : }
     623                 :            : 
     624                 :            : 
     625                 :            : /* Return number of calls in N.  Ignore cheap builtins.  */
     626                 :            : 
     627                 :            : static int
     628                 :     494493 : num_calls (struct cgraph_node *n)
     629                 :            : {
     630                 :     494493 :   struct cgraph_edge *e;
     631                 :     494493 :   int num = 0;
     632                 :            : 
     633                 :     980005 :   for (e = n->callees; e; e = e->next_callee)
     634                 :     485512 :     if (!is_inexpensive_builtin (e->callee->decl))
     635                 :     476515 :       num++;
     636                 :     494493 :   return num;
     637                 :            : }
     638                 :            : 
     639                 :            : 
     640                 :            : /* Return true if we are interested in inlining a small function.  */
     641                 :            : 
     642                 :            : static bool
     643                 :    2314610 : want_early_inline_function_p (struct cgraph_edge *e)
     644                 :            : {
     645                 :    2314610 :   bool want_inline = true;
     646                 :    2314610 :   struct cgraph_node *callee = e->callee->ultimate_alias_target ();
     647                 :            : 
     648                 :    2314610 :   if (DECL_DISREGARD_INLINE_LIMITS (callee->decl))
     649                 :            :     ;
     650                 :            :   /* For AutoFDO, we need to make sure that before the profile summary,
     651                 :            :      all hot paths' IR looks exactly the same as in the profiled binary.
     652                 :            :      As a result, in the einliner we will disregard the size limit and
     653                 :            :      inline those callsites that are:
     654                 :            :        * inlined in the profiled binary, and
     655                 :            :        * the cloned callee has enough samples to be considered "hot".  */
     656                 :    2314600 :   else if (flag_auto_profile && afdo_callsite_hot_enough_for_early_inline (e))
     657                 :            :     ;
     658                 :    2314600 :   else if (!DECL_DECLARED_INLINE_P (callee->decl)
     659                 :    2314600 :            && !opt_for_fn (e->caller->decl, flag_inline_small_functions))
     660                 :            :     {
     661                 :        133 :       e->inline_failed = CIF_FUNCTION_NOT_INLINE_CANDIDATE;
     662                 :        133 :       report_inline_failed_reason (e);
     663                 :        133 :       want_inline = false;
     664                 :            :     }
     665                 :            :   else
     666                 :            :     {
     667                 :            :       /* First take care of very large functions.  */
     668                 :    2314470 :       int min_growth = estimate_min_edge_growth (e), growth = 0;
     669                 :    2314470 :       int n;
     670                 :    2314470 :       int early_inlining_insns = param_early_inlining_insns;
     671                 :            : 
     672                 :    2314470 :       if (min_growth > early_inlining_insns)
     673                 :            :         {
     674                 :     286991 :           if (dump_enabled_p ())
     675                 :         40 :             dump_printf_loc (MSG_MISSED_OPTIMIZATION, e->call_stmt,
     676                 :            :                              "  will not early inline: %C->%C, "
     677                 :            :                              "call is cold and code would grow "
     678                 :            :                              "at least by %i\n",
     679                 :            :                              e->caller, callee,
     680                 :            :                              min_growth);
     681                 :            :           want_inline = false;
     682                 :            :         }
     683                 :            :       else
     684                 :    2027480 :         growth = estimate_edge_growth (e);
     685                 :            : 
     686                 :            : 
     687                 :    2314470 :       if (!want_inline || growth <= param_max_inline_insns_size)
     688                 :            :         ;
     689                 :     609908 :       else if (!e->maybe_hot_p ())
     690                 :            :         {
     691                 :      11616 :           if (dump_enabled_p ())
     692                 :          0 :             dump_printf_loc (MSG_MISSED_OPTIMIZATION, e->call_stmt,
     693                 :            :                              "  will not early inline: %C->%C, "
     694                 :            :                              "call is cold and code would grow by %i\n",
     695                 :            :                              e->caller, callee,
     696                 :            :                              growth);
     697                 :            :           want_inline = false;
     698                 :            :         }
     699                 :     598292 :       else if (growth > early_inlining_insns)
     700                 :            :         {
     701                 :     103799 :           if (dump_enabled_p ())
     702                 :          0 :             dump_printf_loc (MSG_MISSED_OPTIMIZATION, e->call_stmt,
     703                 :            :                              "  will not early inline: %C->%C, "
     704                 :            :                              "growth %i exceeds --param early-inlining-insns\n",
     705                 :            :                              e->caller, callee, growth);
     706                 :            :           want_inline = false;
     707                 :            :         }
     708                 :     494493 :       else if ((n = num_calls (callee)) != 0
     709                 :     494493 :                && growth * (n + 1) > early_inlining_insns)
     710                 :            :         {
     711                 :     139009 :           if (dump_enabled_p ())
     712                 :         11 :             dump_printf_loc (MSG_MISSED_OPTIMIZATION, e->call_stmt,
     713                 :            :                              "  will not early inline: %C->%C, "
     714                 :            :                              "growth %i exceeds --param early-inlining-insns "
     715                 :            :                              "divided by number of calls\n",
     716                 :            :                              e->caller, callee, growth);
     717                 :            :           want_inline = false;
     718                 :            :         }
     719                 :            :     }
     720                 :    2314610 :   return want_inline;
     721                 :            : }
     722                 :            : 
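The chain of size checks above can be condensed into a small predicate. This is only a sketch with the estimated growth values passed in directly; the parameters stand for --param early-inlining-insns and --param max-inline-insns-size, and the helper name is hypothetical:

```cpp
#include <cassert>

// Condensed sketch of the checks in want_early_inline_function_p.
// min_growth/growth play the role of estimate_min_edge_growth and
// estimate_edge_growth; hot stands for e->maybe_hot_p ().
static bool
want_early_inline_sketch (int min_growth, int growth, bool hot,
                          int ncalls, int early_insns, int max_insns_size)
{
  if (min_growth > early_insns)     // very large function: reject cheaply
    return false;
  if (growth <= max_insns_size)     // code does not grow noticeably
    return true;
  if (!hot)                         // cold call site
    return false;
  if (growth > early_insns)         // growth exceeds the parameter
    return false;
  // Growth is weighted by the number of calls inside the callee.
  if (ncalls != 0 && growth * (ncalls + 1) > early_insns)
    return false;
  return true;
}
```

Note how a callee containing many calls gets a proportionally tighter growth budget: the `growth * (n + 1)` term rejects inlining that would merely relocate, not remove, call overhead.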
     723                 :            : /* Compute time of the edge->caller + edge->callee execution when inlining
     724                 :            :    does not happen.  */
     725                 :            : 
     726                 :            : inline sreal
     727                 :     205794 : compute_uninlined_call_time (struct cgraph_edge *edge,
     728                 :            :                              sreal uninlined_call_time,
     729                 :            :                              sreal freq)
     730                 :            : {
     731                 :     411588 :   cgraph_node *caller = (edge->caller->inlined_to
     732                 :     205794 :                          ? edge->caller->inlined_to
     733                 :            :                          : edge->caller);
     734                 :            : 
     735                 :     205794 :   if (freq > 0)
     736                 :     199606 :     uninlined_call_time *= freq;
     737                 :            :   else
     738                 :       6188 :     uninlined_call_time = uninlined_call_time >> 11;
     739                 :            : 
     740                 :     205794 :   sreal caller_time = ipa_fn_summaries->get (caller)->time;
     741                 :     205794 :   return uninlined_call_time + caller_time;
     742                 :            : }
     743                 :            : 
     744                 :            : /* Same as compute_uninlined_call_time but compute the time when inlining
     745                 :            :    does happen.  */
     746                 :            : 
     747                 :            : inline sreal
     748                 :     205794 : compute_inlined_call_time (struct cgraph_edge *edge,
     749                 :            :                            sreal time,
     750                 :            :                            sreal freq)
     751                 :            : {
     752                 :     411588 :   cgraph_node *caller = (edge->caller->inlined_to
     753                 :     205794 :                          ? edge->caller->inlined_to
     754                 :            :                          : edge->caller);
     755                 :     205794 :   sreal caller_time = ipa_fn_summaries->get (caller)->time;
     756                 :            : 
     757                 :     205794 :   if (freq > 0)
     758                 :     199606 :     time *= freq;
     759                 :            :   else
     760                 :       6188 :     time = time >> 11;
     761                 :            : 
     762                 :            :   /* This calculation should match one in ipa-fnsummary.c
     763                 :            :      (estimate_edge_size_and_time).  */
     764                 :     205794 :   time -= (sreal)ipa_call_summaries->get (edge)->call_stmt_time * freq;
     765                 :     205794 :   time += caller_time;
     766                 :     205794 :   if (time <= 0)
     767                 :         95 :     time = ((sreal) 1) >> 8;
     768                 :     205794 :   gcc_checking_assert (time >= 0);
     769                 :     205794 :   return time;
     770                 :            : }
     771                 :            : 
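Using plain double in place of GCC's sreal fixed-point type, the two time models above can be sketched as follows. The `>> 11` in the originals is a heavy discount applied when the edge frequency rounds to zero; the function names here are hypothetical:

```cpp
#include <cassert>

// Sketch of compute_uninlined_call_time: the callee's standalone time,
// scaled by call frequency (or heavily discounted when freq == 0),
// plus the caller's own time.
static double
uninlined_call_time_sketch (double callee_time, double freq,
                            double caller_time)
{
  double t = freq > 0 ? callee_time * freq : callee_time / 2048;  // >> 11
  return t + caller_time;
}

// Sketch of compute_inlined_call_time: the same scaling, minus the cost
// of the call statement that inlining removes, clamped to stay positive.
static double
inlined_call_time_sketch (double callee_time, double freq,
                          double caller_time, double call_stmt_time)
{
  double t = freq > 0 ? callee_time * freq : callee_time / 2048;
  t -= call_stmt_time * freq;   // the call instruction itself goes away
  t += caller_time;
  return t > 0 ? t : 1.0 / 256; // matches ((sreal) 1) >> 8
}
```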
     772                 :            : /* Determine the time saved by inlining EDGE of frequency FREQ,
     773                 :            :    where the callee's runtime without inlining is UNINLINED_TIME
     774                 :            :    and with inlining is INLINED_TIME.  */
     775                 :            : 
     776                 :            : inline sreal
     777                 :    3405640 : inlining_speedup (struct cgraph_edge *edge,
     778                 :            :                   sreal freq,
     779                 :            :                   sreal uninlined_time,
     780                 :            :                   sreal inlined_time)
     781                 :            : {
     782                 :    3405640 :   sreal speedup = uninlined_time - inlined_time;
     783                 :            :   /* Handling of call_time should match one in ipa-inline-fnsummary.c
     784                 :            :      (estimate_edge_size_and_time).  */
     785                 :    3405640 :   sreal call_time = ipa_call_summaries->get (edge)->call_stmt_time;
     786                 :            : 
     787                 :    3405640 :   if (freq > 0)
     788                 :            :     {
     789                 :    3395590 :       speedup = (speedup + call_time);
     790                 :    4556290 :       if (freq != 1)
     791                 :    2234890 :        speedup = speedup * freq;
     792                 :            :     }
     793                 :      10050 :   else if (freq == 0)
     794                 :      10050 :     speedup = speedup >> 11;
     795                 :    3405640 :   gcc_checking_assert (speedup >= 0);
     796                 :    3405640 :   return speedup;
     797                 :            : }
     798                 :            : 
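In double arithmetic, the speedup computation above reduces to the per-call time saved plus the removed call overhead, scaled by frequency. A sketch under that simplification (the function name is hypothetical):

```cpp
#include <cassert>

// Sketch of inlining_speedup above with double in place of sreal.
static double
inlining_speedup_sketch (double freq, double call_stmt_time,
                         double uninlined_time, double inlined_time)
{
  double speedup = uninlined_time - inlined_time;
  if (freq > 0)
    // Inlining also removes the call statement itself.
    speedup = (speedup + call_stmt_time) * freq;
  else if (freq == 0)
    speedup = speedup / 2048;   // sreal >> 11: heavy discount
  return speedup;
}
```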
     799                 :            : /* Return true if the speedup for inlining E is bigger than
     800                 :            :    param_inline_min_speedup.  */
     801                 :            : 
     802                 :            : static bool
     803                 :     205794 : big_speedup_p (struct cgraph_edge *e)
     804                 :            : {
     805                 :     205794 :   sreal unspec_time;
     806                 :     205794 :   sreal spec_time = estimate_edge_time (e, &unspec_time);
     807                 :     205794 :   sreal freq = e->sreal_frequency ();
     808                 :     205794 :   sreal time = compute_uninlined_call_time (e, unspec_time, freq);
     809                 :     205794 :   sreal inlined_time = compute_inlined_call_time (e, spec_time, freq);
     810                 :     411588 :   cgraph_node *caller = (e->caller->inlined_to
     811                 :     205794 :                          ? e->caller->inlined_to
     812                 :            :                          : e->caller);
     813                 :     205794 :   int limit = opt_for_fn (caller->decl, param_inline_min_speedup);
     814                 :            : 
     815                 :     205794 :   if ((time - inlined_time) * 100 > time * limit)
     816                 :      11882 :     return true;
     817                 :            :   return false;
     818                 :            : }
     819                 :            : 
      820                 :            : /* Return true if we are interested in inlining the small function E.
      821                 :            :    When REPORT is true, report the reason to the dump file.  */
     822                 :            : 
     823                 :            : static bool
     824                 :    2273870 : want_inline_small_function_p (struct cgraph_edge *e, bool report)
     825                 :            : {
     826                 :    2273870 :   bool want_inline = true;
     827                 :    2273870 :   struct cgraph_node *callee = e->callee->ultimate_alias_target ();
     828                 :    4547740 :   cgraph_node *to  = (e->caller->inlined_to
     829                 :    2273870 :                       ? e->caller->inlined_to : e->caller);
     830                 :            : 
     831                 :            :   /* Allow this function to be called before can_inline_edge_p,
     832                 :            :      since it's usually cheaper.  */
     833                 :    2273870 :   if (cgraph_inline_failed_type (e->inline_failed) == CIF_FINAL_ERROR)
     834                 :            :     want_inline = false;
     835                 :    2273870 :   else if (DECL_DISREGARD_INLINE_LIMITS (callee->decl))
     836                 :            :     ;
     837                 :    2268840 :   else if (!DECL_DECLARED_INLINE_P (callee->decl)
     838                 :    2268840 :            && !opt_for_fn (e->caller->decl, flag_inline_small_functions))
     839                 :            :     {
     840                 :      26771 :       e->inline_failed = CIF_FUNCTION_NOT_INLINE_CANDIDATE;
     841                 :      26771 :       want_inline = false;
     842                 :            :     }
      843                 :            :   /* Do a fast and conservative check to see if the function can be a
      844                 :            :      good inline candidate.  */
     845                 :    2242070 :   else if ((!DECL_DECLARED_INLINE_P (callee->decl)
     846                 :    1089180 :            && (!e->count.ipa ().initialized_p () || !e->maybe_hot_p ()))
     847                 :    3331180 :            && ipa_fn_summaries->get (callee)->min_size
     848                 :    1089100 :                 - ipa_call_summaries->get (e)->call_stmt_size
     849                 :    1089100 :               > inline_insns_auto (e->caller, true))
     850                 :            :     {
     851                 :     543111 :       e->inline_failed = CIF_MAX_INLINE_INSNS_AUTO_LIMIT;
     852                 :     543111 :       want_inline = false;
     853                 :            :     }
     854                 :    1698960 :   else if ((DECL_DECLARED_INLINE_P (callee->decl)
     855                 :     554046 :             || e->count.ipa ().nonzero_p ())
     856                 :    2852200 :            && ipa_fn_summaries->get (callee)->min_size
     857                 :    1153060 :                 - ipa_call_summaries->get (e)->call_stmt_size
     858                 :    1153060 :               > inline_insns_single (e->caller, true))
     859                 :            :     {
     860                 :      20434 :       e->inline_failed = (DECL_DECLARED_INLINE_P (callee->decl)
     861                 :      20434 :                           ? CIF_MAX_INLINE_INSNS_SINGLE_LIMIT
     862                 :      20434 :                           : CIF_MAX_INLINE_INSNS_AUTO_LIMIT);
     863                 :      20434 :       want_inline = false;
     864                 :            :     }
     865                 :            :   else
     866                 :            :     {
     867                 :    1678530 :       int growth = estimate_edge_growth (e);
     868                 :    1678530 :       ipa_hints hints = estimate_edge_hints (e);
     869                 :    1678530 :       bool apply_hints = (hints & (INLINE_HINT_indirect_call
     870                 :            :                                    | INLINE_HINT_known_hot
     871                 :            :                                    | INLINE_HINT_loop_iterations
     872                 :            :                                    | INLINE_HINT_loop_stride));
     873                 :            : 
     874                 :    1678530 :       if (growth <= opt_for_fn (to->decl,
     875                 :            :                                 param_max_inline_insns_size))
     876                 :            :         ;
      877                 :            :       /* Apply the param_max_inline_insns_single limit.  Do not do so when
      878                 :            :          hints suggest that inlining the given function is very profitable.
      879                 :            :          Avoid computing big_speedup_p when it is not necessary to change
      880                 :            :          the outcome of the decision.  */
     881                 :    1631970 :       else if (DECL_DECLARED_INLINE_P (callee->decl)
     882                 :    1115740 :                && growth >= inline_insns_single (e->caller, apply_hints)
     883                 :    1684420 :                && (apply_hints
     884                 :      52445 :                    || growth >= inline_insns_single (e->caller, true)
     885                 :      49917 :                    || !big_speedup_p (e)))
     886                 :            :         {
     887                 :      52386 :           e->inline_failed = CIF_MAX_INLINE_INSNS_SINGLE_LIMIT;
     888                 :      52386 :           want_inline = false;
     889                 :            :         }
     890                 :    1579580 :       else if (!DECL_DECLARED_INLINE_P (callee->decl)
     891                 :     516227 :                && !opt_for_fn (e->caller->decl, flag_inline_functions)
     892                 :    1584920 :                && growth >= opt_for_fn (to->decl,
     893                 :            :                                         param_max_inline_insns_small))
     894                 :            :         {
     895                 :            :           /* growth_positive_p is expensive, always test it last.  */
     896                 :       5339 :           if (growth >= inline_insns_single (e->caller, false)
     897                 :       5339 :               || growth_positive_p (callee, e, growth))
     898                 :            :             {
     899                 :       4991 :               e->inline_failed = CIF_NOT_DECLARED_INLINED;
     900                 :       4991 :               want_inline = false;
     901                 :            :             }
     902                 :            :         }
      903                 :            :       /* Apply the param_max_inline_insns_auto limit for functions not
      904                 :            :          declared inline.  Bypass the limit when the speedup seems big.  */
     905                 :    1574250 :       else if (!DECL_DECLARED_INLINE_P (callee->decl)
     906                 :    1021780 :                && growth >= inline_insns_auto (e->caller, apply_hints)
     907                 :    1829960 :                && (apply_hints
     908                 :     253832 :                    || growth >= inline_insns_auto (e->caller, true)
     909                 :     155683 :                    || !big_speedup_p (e)))
     910                 :            :         {
     911                 :            :           /* growth_positive_p is expensive, always test it last.  */
     912                 :     244064 :           if (growth >= inline_insns_single (e->caller, false)
     913                 :     244064 :               || growth_positive_p (callee, e, growth))
     914                 :            :             {
     915                 :     199842 :               e->inline_failed = CIF_MAX_INLINE_INSNS_AUTO_LIMIT;
     916                 :     199842 :               want_inline = false;
     917                 :            :             }
     918                 :            :         }
      919                 :            :       /* If the call is cold, do not inline when the function body would grow.  */
     920                 :    1330180 :       else if (!e->maybe_hot_p ()
     921                 :    1330180 :                && (growth >= inline_insns_single (e->caller, false)
     922                 :     274087 :                    || growth_positive_p (callee, e, growth)))
     923                 :            :         {
     924                 :     225413 :           e->inline_failed = CIF_UNLIKELY_CALL;
     925                 :     225413 :           want_inline = false;
     926                 :            :         }
     927                 :            :     }
     928                 :    2273870 :   if (!want_inline && report)
     929                 :     351869 :     report_inline_failed_reason (e);
     930                 :    2273870 :   return want_inline;
     931                 :            : }
     932                 :            : 
      933                 :            : /* EDGE is a self-recursive edge.
      934                 :            :    We handle two cases - when function A is inlined into itself
      935                 :            :    or when function A is being inlined into another inlined copy of
      936                 :            :    function A within function B.
      937                 :            : 
      938                 :            :    In the first case OUTER_NODE points to the toplevel copy of A, while
      939                 :            :    in the second case OUTER_NODE points to the outermost copy of A in B.
      940                 :            : 
      941                 :            :    In both cases we want to be extra selective, since inlining the call
      942                 :            :    will just cause new recursive calls to appear.  */
     943                 :            : 
     944                 :            : static bool
     945                 :      14636 : want_inline_self_recursive_call_p (struct cgraph_edge *edge,
     946                 :            :                                    struct cgraph_node *outer_node,
     947                 :            :                                    bool peeling,
     948                 :            :                                    int depth)
     949                 :            : {
     950                 :      14636 :   char const *reason = NULL;
     951                 :      14636 :   bool want_inline = true;
     952                 :      14636 :   sreal caller_freq = 1;
     953                 :      14636 :   int max_depth = opt_for_fn (outer_node->decl,
     954                 :            :                               param_max_inline_recursive_depth_auto);
     955                 :            : 
     956                 :      14636 :   if (DECL_DECLARED_INLINE_P (edge->caller->decl))
     957                 :       2543 :     max_depth = opt_for_fn (outer_node->decl,
     958                 :            :                             param_max_inline_recursive_depth);
     959                 :            : 
     960                 :      14636 :   if (!edge->maybe_hot_p ())
     961                 :            :     {
     962                 :            :       reason = "recursive call is cold";
     963                 :            :       want_inline = false;
     964                 :            :     }
     965                 :      14587 :   else if (depth > max_depth)
     966                 :            :     {
     967                 :            :       reason = "--param max-inline-recursive-depth exceeded.";
     968                 :            :       want_inline = false;
     969                 :            :     }
     970                 :      12422 :   else if (outer_node->inlined_to
     971                 :      15259 :            && (caller_freq = outer_node->callers->sreal_frequency ()) == 0)
     972                 :            :     {
     973                 :          0 :       reason = "caller frequency is 0";
     974                 :          0 :       want_inline = false;
     975                 :            :     }
     976                 :            : 
     977                 :      14636 :   if (!want_inline)
     978                 :            :     ;
      979                 :            :   /* Inlining of a self-recursive function into a copy of itself within
      980                 :            :      another function is a transformation similar to loop peeling.
      981                 :            : 
      982                 :            :      Peeling is profitable if we can inline enough copies to make the
      983                 :            :      probability of an actual call to the self-recursive function very
      984                 :            :      small.  Be sure that the probability of recursion is small.
      985                 :            : 
      986                 :            :      We ensure that the frequency of recursing is at most 1 - (1/max_depth).
      987                 :            :      This way the expected number of recursions is at most max_depth.  */
     988                 :      12422 :   else if (peeling)
     989                 :            :     {
     990                 :       2837 :       sreal max_prob = (sreal)1 - ((sreal)1 / (sreal)max_depth);
     991                 :       2837 :       int i;
     992                 :       6673 :       for (i = 1; i < depth; i++)
     993                 :       3836 :         max_prob = max_prob * max_prob;
     994                 :       2837 :       if (edge->sreal_frequency () >= max_prob * caller_freq)
     995                 :            :         {
     996                 :       1270 :           reason = "frequency of recursive call is too large";
     997                 :       1270 :           want_inline = false;
     998                 :            :         }
     999                 :            :     }
     1000                 :            :   /* Recursive inlining, i.e. the equivalent of unrolling, is profitable if
     1001                 :            :      the recursion depth is large.  We reduce function call overhead and
     1002                 :            :      increase the chances that things fit in the hardware return predictor.
     1003                 :            : 
     1004                 :            :      Recursive inlining might however increase the cost of stack frame setup,
     1005                 :            :      actually slowing down functions whose recursion tree is wide rather
     1006                 :            :      than deep.
     1007                 :            : 
     1008                 :            :      Deciding reliably when to do recursive inlining without profile feedback
     1009                 :            :      is tricky.  For now we disable recursive inlining when the probability
     1010                 :            :      of self recursion is low.
     1011                 :            : 
     1012                 :            :      Recursive inlining of a self-recursive call within a loop also results
     1013                 :            :      in large loop depths that generally optimize badly.  We may want to
     1014                 :            :      throttle down inlining in those cases.  In particular this seems to
     1015                 :            :      happen in one of the libstdc++ rb tree methods.  */
    1016                 :            :   else
    1017                 :            :     {
    1018                 :       9585 :       if (edge->sreal_frequency () * 100
    1019                 :       9585 :           <= caller_freq
    1020                 :      19170 :              * opt_for_fn (outer_node->decl,
    1021                 :            :                            param_min_inline_recursive_probability))
    1022                 :            :         {
    1023                 :        395 :           reason = "frequency of recursive call is too small";
    1024                 :        395 :           want_inline = false;
    1025                 :            :         }
    1026                 :            :     }
    1027                 :      14636 :   if (!want_inline && dump_enabled_p ())
    1028                 :          9 :     dump_printf_loc (MSG_MISSED_OPTIMIZATION, edge->call_stmt,
    1029                 :            :                      "   not inlining recursively: %s\n", reason);
    1030                 :      14636 :   return want_inline;
    1031                 :            : }
    1032                 :            : 
     1033                 :            : /* Return true when NODE has an uninlinable caller;
     1034                 :            :    set HAS_HOT_CALL if it has a hot call.
     1035                 :            :    Worker for cgraph_for_node_and_aliases.  */
    1036                 :            : 
    1037                 :            : static bool
    1038                 :      58897 : check_callers (struct cgraph_node *node, void *has_hot_call)
    1039                 :            : {
    1040                 :      58897 :   struct cgraph_edge *e;
    1041                 :     100880 :    for (e = node->callers; e; e = e->next_caller)
    1042                 :            :      {
    1043                 :      61549 :        if (!opt_for_fn (e->caller->decl, flag_inline_functions_called_once)
    1044                 :      61549 :            || !opt_for_fn (e->caller->decl, optimize))
    1045                 :            :          return true;
    1046                 :      61549 :        if (!can_inline_edge_p (e, true))
    1047                 :            :          return true;
    1048                 :      61549 :        if (e->recursive_p ())
    1049                 :            :          return true;
    1050                 :      61549 :        if (!can_inline_edge_by_limits_p (e, true))
    1051                 :            :          return true;
    1052                 :      41983 :        if (!(*(bool *)has_hot_call) && e->maybe_hot_p ())
    1053                 :       8983 :          *(bool *)has_hot_call = true;
    1054                 :            :      }
    1055                 :            :   return false;
    1056                 :            : }
    1057                 :            : 
    1058                 :            : /* If NODE has a caller, return true.  */
    1059                 :            : 
    1060                 :            : static bool
    1061                 :    1435990 : has_caller_p (struct cgraph_node *node, void *data ATTRIBUTE_UNUSED)
    1062                 :            : {
    1063                 :    1435990 :   if (node->callers)
    1064                 :     512463 :     return true;
    1065                 :            :   return false;
    1066                 :            : }
    1067                 :            : 
     1068                 :            : /* Decide if inlining NODE would reduce the unit size by eliminating
     1069                 :            :    the offline copy of the function.
     1070                 :            :    When COLD is true, cold calls are considered too.  */
    1071                 :            : 
    1072                 :            : static bool
    1073                 :    2604310 : want_inline_function_to_all_callers_p (struct cgraph_node *node, bool cold)
    1074                 :            : {
    1075                 :    2604310 :   bool has_hot_call = false;
    1076                 :            : 
     1077                 :            :   /* Aliases get inlined along with the function they alias.  */
    1078                 :    2604310 :   if (node->alias)
    1079                 :            :     return false;
    1080                 :            :   /* Already inlined?  */
    1081                 :    2540500 :   if (node->inlined_to)
    1082                 :            :     return false;
    1083                 :            :   /* Does it have callers?  */
    1084                 :    1397430 :   if (!node->call_for_symbol_and_aliases (has_caller_p, NULL, true))
    1085                 :            :     return false;
    1086                 :            :   /* Inlining into all callers would increase size?  */
    1087                 :     512463 :   if (growth_positive_p (node, NULL, INT_MIN) > 0)
    1088                 :            :     return false;
    1089                 :            :   /* All inlines must be possible.  */
    1090                 :      55944 :   if (node->call_for_symbol_and_aliases (check_callers, &has_hot_call,
    1091                 :            :                                          true))
    1092                 :            :     return false;
    1093                 :      36378 :   if (!cold && !has_hot_call)
    1094                 :      14016 :     return false;
    1095                 :            :   return true;
    1096                 :            : }
    1097                 :            : 
     1098                 :            : /* Return true if WHERE of SIZE is a possible candidate for the wrapper
     1099                 :            :    heuristic in edge_badness.  */
    1100                 :            : 
    1101                 :            : static bool
    1102                 :     303222 : wrapper_heuristics_may_apply (struct cgraph_node *where, int size)
    1103                 :            : {
    1104                 :     303222 :   return size < (DECL_DECLARED_INLINE_P (where->decl)
    1105                 :      89246 :                  ? inline_insns_single (where, false)
    1106                 :     517198 :                  : inline_insns_auto (where, false));
    1107                 :            : }
    1108                 :            : 
     1109                 :            : /* A cost model driving the inlining heuristics in such a way that edges
     1110                 :            :    with the smallest badness are inlined first.  After each inlining the
     1111                 :            :    costs of all caller edges of the affected nodes are recomputed, so the
     1112                 :            :    metrics may accurately depend on values such as the number of inlinable
     1113                 :            :    callers of the function or the function body size.  */
    1114                 :            : 
    1115                 :            : static sreal
    1116                 :    3568080 : edge_badness (struct cgraph_edge *edge, bool dump)
    1117                 :            : {
    1118                 :    3568080 :   sreal badness;
    1119                 :    3568080 :   int growth;
    1120                 :    3568080 :   sreal edge_time, unspec_edge_time;
    1121                 :    3568080 :   struct cgraph_node *callee = edge->callee->ultimate_alias_target ();
    1122                 :    3568080 :   class ipa_fn_summary *callee_info = ipa_fn_summaries->get (callee);
    1123                 :    3568080 :   ipa_hints hints;
    1124                 :    7136160 :   cgraph_node *caller = (edge->caller->inlined_to
    1125                 :    3568080 :                          ? edge->caller->inlined_to
    1126                 :            :                          : edge->caller);
    1127                 :            : 
    1128                 :    3568080 :   growth = estimate_edge_growth (edge);
    1129                 :    3568080 :   edge_time = estimate_edge_time (edge, &unspec_edge_time);
    1130                 :    3568080 :   hints = estimate_edge_hints (edge);
    1131                 :    3568080 :   gcc_checking_assert (edge_time >= 0);
    1132                 :            :   /* Check that inlined time is better, but tolerate some roundoff issues.
    1133                 :            :      FIXME: When callee profile drops to 0 we account calls more.  This
    1134                 :            :      should be fixed by never doing that.  */
    1135                 :    3568080 :   gcc_checking_assert ((edge_time * 100
    1136                 :            :                         - callee_info->time * 101).to_int () <= 0
    1137                 :            :                         || callee->count.ipa ().initialized_p ());
    1138                 :    3568080 :   gcc_checking_assert (growth <= ipa_size_summaries->get (callee)->size);
    1139                 :            : 
    1140                 :    3568080 :   if (dump)
    1141                 :            :     {
    1142                 :        194 :       fprintf (dump_file, "    Badness calculation for %s -> %s\n",
    1143                 :        194 :                edge->caller->dump_name (),
    1144                 :        194 :                edge->callee->dump_name ());
    1145                 :        194 :       fprintf (dump_file, "      size growth %i, time %f unspec %f ",
    1146                 :            :                growth,
    1147                 :            :                edge_time.to_double (),
    1148                 :            :                unspec_edge_time.to_double ());
    1149                 :        194 :       ipa_dump_hints (dump_file, hints);
    1150                 :        194 :       if (big_speedup_p (edge))
    1151                 :        169 :         fprintf (dump_file, " big_speedup");
    1152                 :        194 :       fprintf (dump_file, "\n");
    1153                 :            :     }
    1154                 :            : 
    1155                 :            :   /* Always prefer inlining saving code size.  */
    1156                 :    3568080 :   if (growth <= 0)
    1157                 :            :     {
    1158                 :     140597 :       badness = (sreal) (-SREAL_MIN_SIG + growth) << (SREAL_MAX_EXP / 256);
    1159                 :     140597 :       if (dump)
    1160                 :        115 :         fprintf (dump_file, "      %f: Growth %d <= 0\n", badness.to_double (),
    1161                 :            :                  growth);
    1162                 :            :     }
    1163                 :            :    /* Inlining into EXTERNAL functions is not going to change anything unless
    1164                 :            :       they are themselves inlined.  */
    1165                 :    3427480 :    else if (DECL_EXTERNAL (caller->decl))
    1166                 :            :     {
    1167                 :      20557 :       if (dump)
    1168                 :          0 :         fprintf (dump_file, "      max: function is external\n");
    1169                 :      20557 :       return sreal::max ();
    1170                 :            :     }
     1171                 :            :   /* When the profile is available, compute badness as:
     1172                 :            : 
     1173                 :            :                  time_saved * caller_count
     1174                 :            :      goodness =  -------------------------------------------------
     1175                 :            :                  growth_of_caller * overall_growth * combined_size
     1176                 :            : 
     1177                 :            :      badness = - goodness
     1178                 :            : 
     1179                 :            :      Again, use a negative value to make calls with a profile appear
     1180                 :            :      hotter than calls without one.
     1181                 :            :   */
    1182                 :    3406930 :   else if (opt_for_fn (caller->decl, flag_guess_branch_prob)
    1183                 :    3406930 :            || caller->count.ipa ().nonzero_p ())
    1184                 :            :     {
    1185                 :    3405560 :       sreal numerator, denominator;
    1186                 :    3405560 :       int overall_growth;
    1187                 :    3405560 :       sreal freq = edge->sreal_frequency ();
    1188                 :            : 
    1189                 :    3405560 :       numerator = inlining_speedup (edge, freq, unspec_edge_time, edge_time);
    1190                 :    3405560 :       if (numerator <= 0)
    1191                 :       1894 :         numerator = ((sreal) 1 >> 8);
    1192                 :    3405560 :       if (caller->count.ipa ().nonzero_p ())
    1193                 :         73 :         numerator *= caller->count.ipa ().to_gcov_type ();
    1194                 :    3405490 :       else if (caller->count.ipa ().initialized_p ())
    1195                 :        566 :         numerator = numerator >> 11;
    1196                 :    3405560 :       denominator = growth;
    1197                 :            : 
    1198                 :    3405560 :       overall_growth = callee_info->growth;
    1199                 :            : 
    1200                 :            :       /* Look for inliner wrappers of the form:
    1201                 :            : 
    1202                 :            :          inline_caller ()
    1203                 :            :            {
    1204                 :            :              do_fast_job...
    1205                 :            :              if (need_more_work)
    1206                 :            :                noninline_callee ();
    1207                 :            :            }
     1208                 :            :          Without penalizing this case, we usually inline noninline_callee
     1209                 :            :          into the inline_caller because overall_growth is small, preventing
     1210                 :            :          further inlining of inline_caller.
    1211                 :            : 
    1212                 :            :          Penalize only callgraph edges to functions with small overall
    1213                 :            :          growth ...
    1214                 :            :         */
    1215                 :    3405560 :       if (growth > overall_growth
    1216                 :            :           /* ... and having only one caller which is not inlined ... */
    1217                 :     842979 :           && callee_info->single_caller
    1218                 :     514112 :           && !edge->caller->inlined_to
    1219                 :            :           /* ... and edges executed only conditionally ... */
    1220                 :     763406 :           && freq < 1
    1221                 :            :           /* ... consider case where callee is not inline but caller is ... */
    1222                 :    3570890 :           && ((!DECL_DECLARED_INLINE_P (edge->callee->decl)
    1223                 :      64648 :                && DECL_DECLARED_INLINE_P (caller->decl))
    1224                 :            :               /* ... or when early optimizers decided to split and edge
    1225                 :            :                  frequency still indicates splitting is a win ... */
    1226                 :     160019 :               || (callee->split_part && !caller->split_part
    1227                 :      39686 :                   && freq * 100
    1228                 :    3480980 :                          < opt_for_fn (caller->decl,
    1229                 :            :                                        param_partial_inlining_entry_probability)
    1230                 :            :                   /* ... and do not overwrite user specified hints.   */
    1231                 :      39407 :                   && (!DECL_DECLARED_INLINE_P (edge->callee->decl)
    1232                 :      26402 :                       || DECL_DECLARED_INLINE_P (caller->decl)))))
    1233                 :            :         {
    1234                 :      43360 :           ipa_fn_summary *caller_info = ipa_fn_summaries->get (caller);
    1235                 :      43360 :           int caller_growth = caller_info->growth;
    1236                 :            : 
    1237                 :            :           /* Only apply the penalty when the caller looks like an inline
    1238                 :            :              candidate and is not called just once.  */
    1239                 :      23828 :           if (!caller_info->single_caller && overall_growth < caller_growth
    1240                 :      22943 :               && caller_info->inlinable
    1241                 :      66297 :               && wrapper_heuristics_may_apply
    1242                 :      22937 :                  (caller, ipa_size_summaries->get (caller)->size))
    1243                 :            :             {
    1244                 :      18001 :               if (dump)
    1245                 :          1 :                 fprintf (dump_file,
    1246                 :            :                          "     Wrapper penalty. Increasing growth %i to %i\n",
    1247                 :            :                          overall_growth, caller_growth);
    1248                 :            :               overall_growth = caller_growth;
    1249                 :            :             }
    1250                 :            :         }
    1251                 :    3405560 :       if (overall_growth > 0)
    1252                 :            :         {
    1253                 :            :           /* Strongly prefer functions with few callers that can be inlined
    1254                 :            :              fully.  The squaring here leads to smaller binaries on average.
    1255                 :            :              Watch however for extreme cases and return to a linear function
    1256                 :            :              when growth is large.  */
    1257                 :    2823610 :           if (overall_growth < 256)
    1258                 :    1702980 :             overall_growth *= overall_growth;
    1259                 :            :           else
    1260                 :    1120630 :             overall_growth += 256 * 256 - 256;
    1261                 :    2823610 :           denominator *= overall_growth;
    1262                 :            :         }
    1263                 :    3405560 :       denominator *= ipa_size_summaries->get (caller)->size + growth;
    1264                 :            : 
    1265                 :    3405560 :       badness = - numerator / denominator;
    1266                 :            : 
    1267                 :    3405560 :       if (dump)
    1268                 :            :         {
    1269                 :        316 :           fprintf (dump_file,
    1270                 :            :                    "      %f: guessed profile. frequency %f, count %" PRId64
    1271                 :            :                    " caller count %" PRId64
    1272                 :            :                    " time saved %f"
    1273                 :            :                    " overall growth %i (current) %i (original)"
    1274                 :            :                    " %i (compensated)\n",
    1275                 :            :                    badness.to_double (),
    1276                 :            :                    freq.to_double (),
    1277                 :         79 :                    edge->count.ipa ().initialized_p () ? edge->count.ipa ().to_gcov_type () : -1,
    1278                 :         79 :                    caller->count.ipa ().initialized_p () ? caller->count.ipa ().to_gcov_type () : -1,
    1279                 :        158 :                    inlining_speedup (edge, freq, unspec_edge_time, edge_time).to_double (),
    1280                 :            :                    estimate_growth (callee),
    1281                 :            :                    callee_info->growth, overall_growth);
    1282                 :            :         }
    1283                 :            :     }
    1284                 :            :   /* When the function-local profile is not available or does not give
    1285                 :            :      useful information (i.e. frequency is zero), base the cost on
    1286                 :            :      loop nest and overall size growth, so we optimize for the overall
    1287                 :            :      number of functions fully inlined in the program.  */
    1288                 :            :   else
    1289                 :            :     {
    1290                 :       1363 :       int nest = MIN (ipa_call_summaries->get (edge)->loop_depth, 8);
    1291                 :       1363 :       badness = growth;
    1292                 :            : 
    1293                 :            :       /* Decrease badness if call is nested.  */
    1294                 :       1363 :       if (badness > 0)
    1295                 :       1363 :         badness = badness >> nest;
    1296                 :            :       else
    1297                 :          0 :         badness = badness << nest;
    1298                 :       1363 :       if (dump)
    1299                 :          0 :         fprintf (dump_file, "      %f: no profile. nest %i\n",
    1300                 :            :                  badness.to_double (), nest);
    1301                 :            :     }
    1302                 :    3547520 :   gcc_checking_assert (badness != 0);
    1303                 :            : 
    1304                 :    3547520 :   if (edge->recursive_p ())
    1305                 :      12131 :     badness = badness.shift (badness > 0 ? 4 : -4);
    1306                 :    3547520 :   if ((hints & (INLINE_HINT_indirect_call
    1307                 :            :                 | INLINE_HINT_loop_iterations
    1308                 :            :                 | INLINE_HINT_loop_stride))
    1309                 :    3481090 :       || callee_info->growth <= 0)
    1310                 :    1584840 :     badness = badness.shift (badness > 0 ? -2 : 2);
    1311                 :    3547520 :   if (hints & (INLINE_HINT_same_scc))
    1312                 :      57088 :     badness = badness.shift (badness > 0 ? 3 : -3);
    1313                 :    3518980 :   else if (hints & (INLINE_HINT_in_scc))
    1314                 :      70058 :     badness = badness.shift (badness > 0 ? 2 : -2);
    1315                 :    3483950 :   else if (hints & (INLINE_HINT_cross_module))
    1316                 :       2368 :     badness = badness.shift (badness > 0 ? 1 : -1);
    1317                 :    3547520 :   if (DECL_DISREGARD_INLINE_LIMITS (callee->decl))
    1318                 :      15072 :     badness = badness.shift (badness > 0 ? -4 : 4);
    1319                 :    3539990 :   else if ((hints & INLINE_HINT_declared_inline))
    1320                 :    5449880 :     badness = badness.shift (badness > 0 ? -3 : 3);
    1321                 :    3547520 :   if (dump)
    1322                 :        194 :     fprintf (dump_file, "      Adjusted by hints %f\n", badness.to_double ());
    1323                 :    3547520 :   return badness;
    1324                 :            : }
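The growth compensation and hint adjustments in edge_badness above are compact but easy to misread. As a hypothetical sketch (plain C with int/double arithmetic rather than GCC's sreal type; function names are illustrative, not GCC's), the piecewise growth penalty squares small growth and continues linearly past 256, with the 256 * 256 - 256 offset making the two pieces meet exactly at the boundary, while each hint shifts badness by a power of two toward or away from zero:

```c
#include <assert.h>

/* Sketch of the piecewise penalty above: quadratic for small growth,
   shifted linear for large growth.  The constant 256 * 256 - 256 is
   chosen so both pieces agree at overall_growth == 256.  */
static int
compensate_growth (int overall_growth)
{
  if (overall_growth < 256)
    return overall_growth * overall_growth;
  return overall_growth + 256 * 256 - 256;
}

/* Sketch of the hint adjustments: badness is negative for profitable
   edges, and shifting by N scales it by 2^N.  A favorable hint (FAVOR
   positive) moves badness away from zero when it is negative and
   toward zero when it is positive; an unfavorable hint (FAVOR
   negative) does the opposite.  */
static double
apply_hint (double badness, int favor)
{
  int n = badness > 0 ? -favor : favor;
  double s = 1.0;
  for (; n > 0; n--) s *= 2.0;
  for (; n < 0; n++) s /= 2.0;
  return badness * s;
}
```

With these definitions, compensate_growth (255) yields 65025 and compensate_growth (256) yields 65536, so the penalty is continuous at the transition.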
    1325                 :            : 
    1326                 :            : /* Recompute badness of EDGE and update its key in HEAP if needed.  */
    1327                 :            : static inline void
    1328                 :    1562650 : update_edge_key (edge_heap_t *heap, struct cgraph_edge *edge)
    1329                 :            : {
    1330                 :    1562650 :   sreal badness = edge_badness (edge, false);
    1331                 :    1562650 :   if (edge->aux)
    1332                 :            :     {
    1333                 :    1055470 :       edge_heap_node_t *n = (edge_heap_node_t *) edge->aux;
    1334                 :    1055470 :       gcc_checking_assert (n->get_data () == edge);
    1335                 :            : 
    1336                 :            :       /* fibonacci_heap::replace_key does eager updating of the
    1337                 :            :          heap that is unnecessarily expensive.
    1338                 :            :          We do lazy increases: after extracting the minimum, if the key
    1339                 :            :          turns out to be out of date, the edge is re-inserted into the
    1340                 :            :          heap with the correct value.  */
    1341                 :    2110930 :       if (badness < n->get_key ())
    1342                 :            :         {
    1343                 :      49981 :           if (dump_file && (dump_flags & TDF_DETAILS))
    1344                 :            :             {
    1345                 :        118 :               fprintf (dump_file,
    1346                 :            :                        "  decreasing badness %s -> %s, %f to %f\n",
    1347                 :         59 :                        edge->caller->dump_name (),
    1348                 :         59 :                        edge->callee->dump_name (),
    1349                 :        118 :                        n->get_key ().to_double (),
    1350                 :            :                        badness.to_double ());
    1351                 :            :             }
    1352                 :      49981 :           heap->decrease_key (n, badness);
    1353                 :            :         }
    1354                 :            :     }
    1355                 :            :   else
    1356                 :            :     {
    1357                 :     507184 :        if (dump_file && (dump_flags & TDF_DETAILS))
    1358                 :            :          {
    1359                 :        314 :            fprintf (dump_file,
    1360                 :            :                     "  enqueuing call %s -> %s, badness %f\n",
    1361                 :        157 :                     edge->caller->dump_name (),
    1362                 :        157 :                     edge->callee->dump_name (),
    1363                 :            :                     badness.to_double ());
    1364                 :            :          }
    1365                 :     507184 :       edge->aux = heap->insert (badness, edge);
    1366                 :            :     }
    1367                 :    1562650 : }
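The lazy-update policy in update_edge_key (decrease keys eagerly, let increases go stale and fix them up at extraction time) can be sketched as follows. This is a hypothetical illustration using a tiny array-based min-heap rather than GCC's fibonacci_heap, with made-up names; the point is only the staleness check on extraction:

```c
#include <assert.h>
#include <stddef.h>

#define CAP 64

struct entry { double key; int id; };
static struct entry heap[CAP];
static size_t heap_n;

/* Insert (key, id) and sift up to restore the min-heap property.  */
static void
heap_insert (double key, int id)
{
  size_t i = heap_n++;
  heap[i].key = key;
  heap[i].id = id;
  while (i && heap[(i - 1) / 2].key > heap[i].key)
    {
      struct entry t = heap[i];
      heap[i] = heap[(i - 1) / 2];
      heap[(i - 1) / 2] = t;
      i = (i - 1) / 2;
    }
}

/* Remove and return the minimum entry, sifting down afterwards.  */
static struct entry
heap_pop (void)
{
  struct entry min = heap[0];
  size_t i = 0;
  heap[0] = heap[--heap_n];
  for (;;)
    {
      size_t l = 2 * i + 1, r = l + 1, m = i;
      if (l < heap_n && heap[l].key < heap[m].key) m = l;
      if (r < heap_n && heap[r].key < heap[m].key) m = r;
      if (m == i) break;
      struct entry t = heap[i]; heap[i] = heap[m]; heap[m] = t;
      i = m;
    }
  return min;
}

/* CURRENT[] holds the up-to-date badness for each edge id.  Keys that
   increased since insertion are stale; instead of updating them
   eagerly, re-queue the entry with its fresh key and try again.  */
static int
lazy_extract_min (const double *current)
{
  for (;;)
    {
      struct entry e = heap_pop ();
      if (e.key == current[e.id])
        return e.id;
      heap_insert (current[e.id], e.id);  /* stale: re-insert.  */
    }
}
```

Decreases still use the heap's native decrease-key so a newly cheap edge surfaces immediately; only increases are deferred, which is safe because a stale-small key is detected before the edge is actually used.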
    1368                 :            : 
    1369                 :            : 
    1370                 :            : /* NODE was inlined.
    1371                 :            :    All caller edges need to be reset because
    1372                 :            :    size estimates change.  Similarly, callees need to be reset
    1373                 :            :    because a better context may be known.  */
    1374                 :            : 
    1375                 :            : static void
    1376                 :     466725 : reset_edge_caches (struct cgraph_node *node)
    1377                 :            : {
    1378                 :     466725 :   struct cgraph_edge *edge;
    1379                 :     466725 :   struct cgraph_edge *e = node->callees;
    1380                 :     466725 :   struct cgraph_node *where = node;
    1381                 :     466725 :   struct ipa_ref *ref;
    1382                 :            : 
    1383                 :     466725 :   if (where->inlined_to)
    1384                 :     432315 :     where = where->inlined_to;
    1385                 :            : 
    1386                 :     466725 :   reset_node_cache (where);
    1387                 :            : 
    1388                 :     466725 :   if (edge_growth_cache != NULL)
    1389                 :    1378890 :     for (edge = where->callers; edge; edge = edge->next_caller)
    1390                 :     912309 :       if (edge->inline_failed)
    1391                 :     912309 :         edge_growth_cache->remove (edge);
    1392                 :            : 
    1393                 :     566283 :   FOR_EACH_ALIAS (where, ref)
    1394                 :      65290 :     reset_edge_caches (dyn_cast <cgraph_node *> (ref->referring));
    1395                 :            : 
    1396                 :     466725 :   if (!e)
    1397                 :            :     return;
    1398                 :            : 
    1399                 :    1075740 :   while (true)
    1400                 :    1075740 :     if (!e->inline_failed && e->callee->callees)
    1401                 :            :       e = e->callee->callees;
    1402                 :            :     else
    1403                 :            :       {
    1404                 :     901410 :         if (edge_growth_cache != NULL && e->inline_failed)
    1405                 :     862795 :           edge_growth_cache->remove (e);
    1406                 :     901410 :         if (e->next_callee)
    1407                 :            :           e = e->next_callee;
    1408                 :            :         else
    1409                 :            :           {
    1410                 :     557698 :             do
    1411                 :            :               {
    1412                 :     557698 :                 if (e->caller == node)
    1413                 :            :                   return;
    1414                 :     174329 :                 e = e->caller->callers;
    1415                 :            :               }
    1416                 :     174329 :             while (!e->next_callee);
    1417                 :            :             e = e->next_callee;
    1418                 :            :           }
    1419                 :            :       }
    1420                 :            : }
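The while (true) loop in reset_edge_caches above (and the similar one in update_callee_keys below) walks the whole inline tree without recursion: descend into an inlined callee's first edge, step across next_callee, and backtrack through caller links when a sibling chain runs out. A hypothetical sketch of the same pattern over a first-child/next-sibling tree, with illustrative field names standing in for the cgraph pointers:

```c
#include <assert.h>
#include <stddef.h>

struct edge
{
  struct edge *parent;   /* plays the role of e->caller->callers */
  struct edge *sibling;  /* plays the role of e->next_callee */
  struct edge *child;    /* plays the role of e->callee->callees */
  int inlined;           /* plays the role of !e->inline_failed */
  int id;
};

/* Record the id of every edge that is not descended through, in the
   order the iterative walk reaches it.  Returns the number recorded.  */
static int
walk (struct edge *first, int *out)
{
  int n = 0;
  struct edge *e = first;
  if (!e)
    return 0;
  for (;;)
    if (e->inlined && e->child)
      e = e->child;                /* descend into the inlined body */
    else
      {
        out[n++] = e->id;          /* process this edge */
        if (e->sibling)
          e = e->sibling;
        else
          {
            do                      /* backtrack until a sibling exists */
              {
                if (!e->parent)
                  return n;
                e = e->parent;
              }
            while (!e->sibling);
            e = e->sibling;
          }
      }
}
```

Because inline clones have exactly one caller edge, the backtracking step in the real code can always recover the edge it descended from, so the walk needs no explicit stack.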
    1421                 :            : 
    1422                 :            : /* Recompute HEAP nodes for each caller of NODE.
    1423                 :            :    UPDATED_NODES tracks nodes we already visited, to avoid redundant work.
    1424                 :            :    When CHECK_INLINABLITY_FOR is set, re-check only that edge for
    1425                 :            :    inlinability.  Otherwise check all edges.  */
    1426                 :            : 
    1427                 :            : static void
    1428                 :     466402 : update_caller_keys (edge_heap_t *heap, struct cgraph_node *node,
    1429                 :            :                     bitmap updated_nodes,
    1430                 :            :                     struct cgraph_edge *check_inlinablity_for)
    1431                 :            : {
    1432                 :     466402 :   struct cgraph_edge *edge;
    1433                 :     466402 :   struct ipa_ref *ref;
    1434                 :            : 
    1435                 :     900379 :   if ((!node->alias && !ipa_fn_summaries->get (node)->inlinable)
    1436                 :     461444 :       || node->inlined_to)
    1437                 :            :     return;
    1438                 :     461444 :   if (!bitmap_set_bit (updated_nodes, node->get_uid ()))
    1439                 :            :     return;
    1440                 :            : 
    1441                 :     560524 :   FOR_EACH_ALIAS (node, ref)
    1442                 :            :     {
    1443                 :      32425 :       struct cgraph_node *alias = dyn_cast <cgraph_node *> (ref->referring);
    1444                 :      32425 :       update_caller_keys (heap, alias, updated_nodes, check_inlinablity_for);
    1445                 :            :     }
    1446                 :            : 
    1447                 :    1282100 :   for (edge = node->callers; edge; edge = edge->next_caller)
    1448                 :     820660 :     if (edge->inline_failed)
    1449                 :            :       {
    1450                 :     820660 :         if (!check_inlinablity_for
    1451                 :     820660 :             || check_inlinablity_for == edge)
    1452                 :            :           {
    1453                 :     820660 :             if (can_inline_edge_p (edge, false)
    1454                 :     778007 :                 && want_inline_small_function_p (edge, false)
    1455                 :    1003390 :                 && can_inline_edge_by_limits_p (edge, false))
    1456                 :     181538 :               update_edge_key (heap, edge);
    1457                 :     639122 :             else if (edge->aux)
    1458                 :            :               {
    1459                 :      37141 :                 report_inline_failed_reason (edge);
    1460                 :      37141 :                 heap->delete_node ((edge_heap_node_t *) edge->aux);
    1461                 :      37141 :                 edge->aux = NULL;
    1462                 :            :               }
    1463                 :            :           }
    1464                 :          0 :         else if (edge->aux)
    1465                 :          0 :           update_edge_key (heap, edge);
    1466                 :            :       }
    1467                 :            : }
    1468                 :            : 
    1469                 :            : /* Recompute HEAP nodes for each uninlined call in NODE.
    1470                 :            :    If UPDATE_SINCE is non-NULL, check whether edges called within that
    1471                 :            :    function are inlinable (typically UPDATE_SINCE is the inline clone we
    1472                 :            :    introduced, where all edges have a new context).
    1473                 :            : 
    1474                 :            :    This is used when we know that edge badnesses are only going to increase
    1475                 :            :    (we introduced a new call site) and thus all we need is to insert newly
    1476                 :            :    created edges into the heap.  */
    1477                 :            : 
    1478                 :            : static void
    1479                 :     434006 : update_callee_keys (edge_heap_t *heap, struct cgraph_node *node,
    1480                 :            :                     struct cgraph_node *update_since,
    1481                 :            :                     bitmap updated_nodes)
    1482                 :            : {
    1483                 :     434006 :   struct cgraph_edge *e = node->callees;
    1484                 :     434006 :   bool check_inlinability = update_since == node;
    1485                 :            : 
    1486                 :     434006 :   if (!e)
    1487                 :            :     return;
    1488                 :    6674670 :   while (true)
    1489                 :    6674670 :     if (!e->inline_failed && e->callee->callees)
    1490                 :            :       {
    1491                 :    1525770 :         if (e->callee == update_since)
    1492                 :     197163 :           check_inlinability = true;
    1493                 :            :         e = e->callee->callees;
    1494                 :            :       }
    1495                 :            :     else
    1496                 :            :       {
    1497                 :    5148900 :         enum availability avail;
    1498                 :    5148900 :         struct cgraph_node *callee;
    1499                 :    5148900 :         if (!check_inlinability)
    1500                 :            :           {
    1501                 :    4276810 :             if (e->aux
    1502                 :    5078860 :                 && !bitmap_bit_p (updated_nodes,
    1503                 :     802050 :                                   e->callee->ultimate_alias_target
    1504                 :     802050 :                                     (&avail, e->caller)->get_uid ()))
    1505                 :     802050 :               update_edge_key (heap, e);
    1506                 :            :           }
    1507                 :            :         /* We do not reset the callee growth cache here.  Since we added a new
    1508                 :            :            call, growth should have just increased, and consequently the
    1509                 :            :            badness metric does not need updating.  */
    1510                 :     872087 :         else if (e->inline_failed
    1511                 :     845425 :                  && (callee = e->callee->ultimate_alias_target (&avail,
    1512                 :     845425 :                                                                 e->caller))
    1513                 :     845425 :                  && avail >= AVAIL_AVAILABLE
    1514                 :     267467 :                  && ipa_fn_summaries->get (callee) != NULL
    1515                 :     267465 :                  && ipa_fn_summaries->get (callee)->inlinable
    1516                 :    1135750 :                  && !bitmap_bit_p (updated_nodes, callee->get_uid ()))
    1517                 :            :           {
    1518                 :     263660 :             if (can_inline_edge_p (e, false)
    1519                 :     257325 :                 && want_inline_small_function_p (e, false)
    1520                 :     395182 :                 && can_inline_edge_by_limits_p (e, false))
    1521                 :            :               {
    1522                 :     130957 :                 gcc_checking_assert (check_inlinability || can_inline_edge_p (e, false));
    1523                 :     130957 :                 gcc_checking_assert (check_inlinability || e->aux);
    1524                 :     130957 :                 update_edge_key (heap, e);
    1525                 :            :               }
    1526                 :     132703 :             else if (e->aux)
    1527                 :            :               {
    1528                 :       2776 :                 report_inline_failed_reason (e);
    1529                 :       2776 :                 heap->delete_node ((edge_heap_node_t *) e->aux);
    1530                 :       2776 :                 e->aux = NULL;
    1531                 :            :               }
    1532                 :            :           }
    1533                 :            :         /* In case we redirected to an unreachable node, we only need to
    1534                 :            :            remove the fibheap entry.  */
    1535                 :     608427 :         else if (e->aux)
    1536                 :            :           {
    1537                 :        487 :             heap->delete_node ((edge_heap_node_t *) e->aux);
    1538                 :        487 :             e->aux = NULL;
    1539                 :            :           }
    1540                 :    5148900 :         if (e->next_callee)
    1541                 :            :           e = e->next_callee;
    1542                 :            :         else
    1543                 :            :           {
    1544                 :    1940920 :             do
    1545                 :            :               {
    1546                 :    1940920 :                 if (e->caller == node)
    1547                 :     415151 :                   return;
    1548                 :    1525770 :                 if (e->caller == update_since)
    1549                 :     197163 :                   check_inlinability = false;
    1550                 :    1525770 :                 e = e->caller->callers;
    1551                 :            :               }
    1552                 :    1525770 :             while (!e->next_callee);
    1553                 :            :             e = e->next_callee;
    1554                 :            :           }
    1555                 :            :       }
    1556                 :            : }
    1557                 :            : 
    1558                 :            : /* Enqueue all recursive calls from NODE into the priority queue, ordered
    1559                 :            :    by how much we want to recursively inline each call.  */
    1560                 :            : 
    1561                 :            : static void
    1562                 :      16823 : lookup_recursive_calls (struct cgraph_node *node, struct cgraph_node *where,
    1563                 :            :                         edge_heap_t *heap)
    1564                 :            : {
    1565                 :      16823 :   struct cgraph_edge *e;
    1566                 :      16823 :   enum availability avail;
    1567                 :            : 
    1568                 :      48213 :   for (e = where->callees; e; e = e->next_callee)
    1569                 :      31390 :     if (e->callee == node
    1570                 :      31390 :         || (e->callee->ultimate_alias_target (&avail, e->caller) == node
    1571                 :        970 :             && avail > AVAIL_INTERPOSABLE))
    1572                 :      12093 :       heap->insert (-e->sreal_frequency (), e);
    1573                 :      48213 :   for (e = where->callees; e; e = e->next_callee)
    1574                 :      31390 :     if (!e->inline_failed)
    1575                 :       6188 :       lookup_recursive_calls (node, e->callee, heap);
    1576                 :      16823 : }
    1577                 :            : 
    1578                 :            : /* Decide on recursive inlining: in case the function has recursive calls,
    1579                 :            :    inline until the body size reaches the given limit.  If any new indirect
    1580                 :            :    edges are discovered in the process, add them to *NEW_EDGES, unless
    1581                 :            :    NEW_EDGES is NULL.  */
    1582                 :            : 
    1583                 :            : static bool
    1584                 :       1445 : recursive_inlining (struct cgraph_edge *edge,
    1585                 :            :                     vec<cgraph_edge *> *new_edges)
    1586                 :            : {
    1587                 :       2890 :   cgraph_node *to  = (edge->caller->inlined_to
    1588                 :       1445 :                       ? edge->caller->inlined_to : edge->caller);
    1589                 :       1445 :   int limit = opt_for_fn (to->decl,
    1590                 :            :                           param_max_inline_insns_recursive_auto);
    1591                 :       2890 :   edge_heap_t heap (sreal::min ());
    1592                 :       1445 :   struct cgraph_node *node;
    1593                 :       1445 :   struct cgraph_edge *e;
    1594                 :       1445 :   struct cgraph_node *master_clone = NULL, *next;
    1595                 :       1445 :   int depth = 0;
    1596                 :       1445 :   int n = 0;
    1597                 :            : 
    1598                 :       1445 :   node = edge->caller;
    1599                 :       1445 :   if (node->inlined_to)
    1600                 :        386 :     node = node->inlined_to;
    1601                 :            : 
    1602                 :       1445 :   if (DECL_DECLARED_INLINE_P (node->decl))
    1603                 :        343 :     limit = opt_for_fn (to->decl, param_max_inline_insns_recursive);
    1604                 :            : 
    1605                 :            :   /* Make sure that function is small enough to be considered for inlining.  */
    1606                 :       1445 :   if (estimate_size_after_inlining (node, edge)  >= limit)
    1607                 :            :     return false;
    1608                 :       1445 :   lookup_recursive_calls (node, node, &heap);
    1609                 :       1445 :   if (heap.empty ())
    1610                 :            :     return false;
    1611                 :            : 
    1612                 :       1445 :   if (dump_file)
    1613                 :          5 :     fprintf (dump_file,
    1614                 :            :              "  Performing recursive inlining on %s\n", node->dump_name ());
    1615                 :            : 
    1616                 :            :   /* Do the inlining and update list of recursive call during process.  */
    1617                 :      13215 :   while (!heap.empty ())
    1618                 :            :     {
    1619                 :      11786 :       struct cgraph_edge *curr = heap.extract_min ();
    1620                 :      11786 :       struct cgraph_node *cnode, *dest = curr->callee;
    1621                 :            : 
    1622                 :      11786 :       if (!can_inline_edge_p (curr, true)
    1623                 :      11786 :           || !can_inline_edge_by_limits_p (curr, true))
    1624                 :          0 :         continue;
    1625                 :            : 
    1626                 :            :       /* MASTER_CLONE is produced once we have already started modifying
    1627                 :            :          the function.  Be sure to redirect the edge to the original body
    1628                 :            :          before estimating growth; otherwise we would be seeing growth after
    1629                 :            :          inlining the already modified body.  */
    1630                 :      11786 :       if (master_clone)
    1631                 :            :         {
    1632                 :      10341 :           curr->redirect_callee (master_clone);
    1633                 :      10341 :           if (edge_growth_cache != NULL)
    1634                 :      10341 :             edge_growth_cache->remove (curr);
    1635                 :            :         }
    1636                 :            : 
    1637                 :      11786 :       if (estimate_size_after_inlining (node, curr) > limit)
    1638                 :            :         {
    1639                 :         16 :           curr->redirect_callee (dest);
    1640                 :         16 :           if (edge_growth_cache != NULL)
    1641                 :         16 :             edge_growth_cache->remove (curr);
    1642                 :            :           break;
    1643                 :            :         }
    1644                 :            : 
    1645                 :      11770 :       depth = 1;
    1646                 :      11770 :       for (cnode = curr->caller;
    1647                 :      64592 :            cnode->inlined_to; cnode = cnode->callers->caller)
    1648                 :     105644 :         if (node->decl
    1649                 :      52822 :             == curr->callee->ultimate_alias_target ()->decl)
    1650                 :      52822 :           depth++;
    1651                 :            : 
    1652                 :      11770 :       if (!want_inline_self_recursive_call_p (curr, node, false, depth))
    1653                 :            :         {
    1654                 :       2580 :           curr->redirect_callee (dest);
    1655                 :       2580 :           if (edge_growth_cache != NULL)
    1656                 :       2580 :             edge_growth_cache->remove (curr);
    1657                 :       2580 :           continue;
    1658                 :            :         }
    1659                 :            : 
    1660                 :       9190 :       if (dump_file)
    1661                 :            :         {
    1662                 :         18 :           fprintf (dump_file,
    1663                 :            :                    "   Inlining call of depth %i", depth);
    1664                 :         36 :           if (node->count.nonzero_p () && curr->count.initialized_p ())
    1665                 :            :             {
    1666                 :          2 :               fprintf (dump_file, " called approx. %.2f times per call",
    1667                 :          2 :                        (double)curr->count.to_gcov_type ()
    1668                 :          2 :                        / node->count.to_gcov_type ());
    1669                 :            :             }
    1670                 :         18 :           fprintf (dump_file, "\n");
    1671                 :            :         }
    1672                 :       9190 :       if (!master_clone)
    1673                 :            :         {
    1674                 :            :           /* We need the original clone to copy around.  */
    1675                 :       1246 :           master_clone = node->create_clone (node->decl, node->count,
    1676                 :            :             false, vNULL, true, NULL, NULL);
    1677                 :       3772 :           for (e = master_clone->callees; e; e = e->next_callee)
    1678                 :       2526 :             if (!e->inline_failed)
    1679                 :        461 :               clone_inlined_nodes (e, true, false, NULL);
    1680                 :       1246 :           curr->redirect_callee (master_clone);
    1681                 :       1246 :           if (edge_growth_cache != NULL)
    1682                 :       1246 :             edge_growth_cache->remove (curr);
    1683                 :            :         }
    1684                 :            : 
    1685                 :       9190 :       inline_call (curr, false, new_edges, &overall_size, true);
    1686                 :       9190 :       reset_node_cache (node);
    1687                 :       9190 :       lookup_recursive_calls (node, curr->callee, &heap);
    1688                 :       9190 :       n++;
    1689                 :            :     }
    1690                 :            : 
    1691                 :       1445 :   if (!heap.empty () && dump_file)
    1692                 :          0 :     fprintf (dump_file, "    Recursive inlining growth limit met.\n");
    1693                 :            : 
    1694                 :       1445 :   if (!master_clone)
    1695                 :            :     return false;
    1696                 :            : 
    1697                 :       1246 :   if (dump_enabled_p ())
    1698                 :          5 :     dump_printf_loc (MSG_NOTE, edge->call_stmt,
    1699                 :            :                      "\n   Inlined %i times, "
    1700                 :            :                      "body grown from size %i to %i, time %f to %f\n", n,
    1701                 :          5 :                      ipa_size_summaries->get (master_clone)->size,
    1702                 :          5 :                      ipa_size_summaries->get (node)->size,
    1703                 :         10 :                      ipa_fn_summaries->get (master_clone)->time.to_double (),
    1704                 :         10 :                      ipa_fn_summaries->get (node)->time.to_double ());
    1705                 :            : 
    1706                 :            :   /* Remove the master clone we used for inlining.  We rely on the fact that
    1707                 :            :      clones inlined into the master clone get queued just before the master
    1708                 :            :      clone, so we don't need recursion.  */
    1709                 :      16415 :   for (node = symtab->first_function (); node != master_clone;
    1710                 :            :        node = next)
    1711                 :            :     {
    1712                 :      13923 :       next = symtab->next_function (node);
    1713                 :      13923 :       if (node->inlined_to == master_clone)
    1714                 :        720 :         node->remove ();
    1715                 :            :     }
    1716                 :       1246 :   master_clone->remove ();
    1717                 :       1246 :   return true;
    1718                 :            : }
    1719                 :            : 
    1720                 :            : 
    1721                 :            : /* Given a whole-compilation-unit estimate of INSNS, compute how large we can
    1722                 :            :    allow the unit to grow.  */
    1723                 :            : 
    1724                 :            : static int64_t
    1725                 :     467059 : compute_max_insns (cgraph_node *node, int insns)
    1726                 :            : {
    1727                 :     467059 :   int max_insns = insns;
    1728                 :          0 :   if (max_insns < opt_for_fn (node->decl, param_large_unit_insns))
    1729                 :            :     max_insns = opt_for_fn (node->decl, param_large_unit_insns);
    1730                 :            : 
    1731                 :     467059 :   return ((int64_t) max_insns
    1732                 :     467059 :           * (100 + opt_for_fn (node->decl, param_inline_unit_growth)) / 100);
    1733                 :            : }
    1734                 :            : 
    1735                 :            : 
    1736                 :            : /* Compute badness of all edges in NEW_EDGES and add them to the HEAP.  */
    1737                 :            : 
    1738                 :            : static void
    1739                 :     433518 : add_new_edges_to_heap (edge_heap_t *heap, vec<cgraph_edge *> new_edges)
    1740                 :            : {
    1741                 :     435762 :   while (new_edges.length () > 0)
    1742                 :            :     {
    1743                 :       2244 :       struct cgraph_edge *edge = new_edges.pop ();
    1744                 :            : 
    1745                 :       2244 :       gcc_assert (!edge->aux);
    1746                 :       2244 :       gcc_assert (edge->callee);
    1747                 :       2244 :       if (edge->inline_failed
    1748                 :       2244 :           && can_inline_edge_p (edge, true)
    1749                 :        923 :           && want_inline_small_function_p (edge, true)
    1750                 :       2892 :           && can_inline_edge_by_limits_p (edge, true))
    1751                 :        648 :         edge->aux = heap->insert (edge_badness (edge, false), edge);
    1752                 :            :     }
    1753                 :     433518 : }
    1754                 :            : 
    1755                 :            : /* Remove EDGE from the fibheap.  */
    1756                 :            : 
    1757                 :            : static void
    1758                 :       4611 : heap_edge_removal_hook (struct cgraph_edge *e, void *data)
    1759                 :            : {
    1760                 :       4611 :   if (e->aux)
    1761                 :            :     {
    1762                 :         10 :       ((edge_heap_t *)data)->delete_node ((edge_heap_node_t *)e->aux);
    1763                 :         10 :       e->aux = NULL;
    1764                 :            :     }
    1765                 :       4611 : }
    1766                 :            : 
    1767                 :            : /* Return true if speculation of edge E seems useful.
    1768                 :            :    If ANTICIPATE_INLINING is true, be conservative and hope that E
    1769                 :            :    may get inlined.  */
    1770                 :            : 
    1771                 :            : bool
    1772                 :      32406 : speculation_useful_p (struct cgraph_edge *e, bool anticipate_inlining)
    1773                 :            : {
    1774                 :            :   /* If we have already decided to inline the edge, it seems useful.  */
    1775                 :      32406 :   if (!e->inline_failed)
    1776                 :            :     return true;
    1777                 :            : 
    1778                 :       5696 :   enum availability avail;
    1779                 :      11392 :   struct cgraph_node *target = e->callee->ultimate_alias_target (&avail,
    1780                 :       5696 :                                                                  e->caller);
    1781                 :            : 
    1782                 :       5696 :   gcc_assert (e->speculative && !e->indirect_unknown_callee);
    1783                 :            : 
    1784                 :       5696 :   if (!e->maybe_hot_p ())
    1785                 :            :     return false;
    1786                 :            : 
    1787                 :            :   /* See if IP optimizations found something potentially useful about the
    1788                 :            :      function.  For now we look only for CONST/PURE flags.  Almost everything
    1789                 :            :      else we propagate is useless.  */
    1790                 :       5690 :   if (avail >= AVAIL_AVAILABLE)
    1791                 :            :     {
    1792                 :       5688 :       int ecf_flags = flags_from_decl_or_type (target->decl);
    1793                 :       5688 :       if (ecf_flags & ECF_CONST)
    1794                 :            :         {
    1795                 :         95 :           if (!(e->speculative_call_indirect_edge ()->indirect_info
    1796                 :         95 :                 ->ecf_flags & ECF_CONST))
    1797                 :            :             return true;
    1798                 :            :         }
    1799                 :       5593 :       else if (ecf_flags & ECF_PURE)
    1800                 :            :         {
    1801                 :       1959 :           if (!(e->speculative_call_indirect_edge ()->indirect_info
    1802                 :       1959 :                 ->ecf_flags & ECF_PURE))
    1803                 :            :             return true;
    1804                 :            :         }
    1805                 :            :     }
    1806                 :            :   /* If we did not manage to inline the function nor redirect
    1807                 :            :      to an ipa-cp clone (which is seen by the local flag being set),
    1808                 :            :      it is probably pointless to inline it unless the hardware is missing
    1809                 :            :      an indirect call predictor.  */
    1810                 :       3636 :   if (!anticipate_inlining && !target->local)
    1811                 :            :     return false;
    1812                 :            :   /* For overwritable targets there is not much to do.  */
    1813                 :       2970 :   if (!can_inline_edge_p (e, false)
    1814                 :       2970 :       || !can_inline_edge_by_limits_p (e, false, true))
    1815                 :          5 :     return false;
    1816                 :            :   /* OK, speculation seems interesting.  */
    1817                 :            :   return true;
    1818                 :            : }
    1819                 :            : 
    1820                 :            : /* We know that EDGE is not going to be inlined.
    1821                 :            :    See if we can remove speculation.  */
    1822                 :            : 
    1823                 :            : static void
    1824                 :      33800 : resolve_noninline_speculation (edge_heap_t *edge_heap, struct cgraph_edge *edge)
    1825                 :            : {
    1826                 :      33800 :   if (edge->speculative && !speculation_useful_p (edge, false))
    1827                 :            :     {
    1828                 :         99 :       struct cgraph_node *node = edge->caller;
    1829                 :        198 :       struct cgraph_node *where = node->inlined_to
    1830                 :         99 :                                   ? node->inlined_to : node;
    1831                 :        198 :       auto_bitmap updated_nodes;
    1832                 :            : 
    1833                 :         99 :       if (edge->count.ipa ().initialized_p ())
    1834                 :          0 :         spec_rem += edge->count.ipa ();
    1835                 :         99 :       cgraph_edge::resolve_speculation (edge);
    1836                 :         99 :       reset_edge_caches (where);
    1837                 :         99 :       ipa_update_overall_fn_summary (where);
    1838                 :         99 :       update_caller_keys (edge_heap, where,
    1839                 :            :                           updated_nodes, NULL);
    1840                 :         99 :       update_callee_keys (edge_heap, where, NULL,
    1841                 :            :                           updated_nodes);
    1842                 :            :     }
    1843                 :      33800 : }
    1844                 :            : 
    1845                 :            : /* Return true if NODE should be accounted for in the overall size estimate.
    1846                 :            :    Skip all nodes optimized for size so we can measure the growth of the hot
    1847                 :            :    part of the program regardless of the padding.  */
    1848                 :            : 
    1849                 :            : bool
    1850                 :    2220890 : inline_account_function_p (struct cgraph_node *node)
    1851                 :            : {
    1852                 :    2220890 :    return (!DECL_EXTERNAL (node->decl)
    1853                 :    2091890 :            && !opt_for_fn (node->decl, optimize_size)
    1854                 :    4253300 :            && node->frequency != NODE_FREQUENCY_UNLIKELY_EXECUTED);
    1855                 :            : }
    1856                 :            : 
    1857                 :            : /* Count the number of callers of NODE and store it into DATA (which
    1858                 :            :    points to an int).  Worker for cgraph_for_node_and_aliases.  */
    1859                 :            : 
    1860                 :            : static bool
    1861                 :     942719 : sum_callers (struct cgraph_node *node, void *data)
    1862                 :            : {
    1863                 :     942719 :   struct cgraph_edge *e;
    1864                 :     942719 :   int *num_calls = (int *)data;
    1865                 :            : 
    1866                 :    2075770 :   for (e = node->callers; e; e = e->next_caller)
    1867                 :    1133060 :     (*num_calls)++;
    1868                 :     942719 :   return false;
    1869                 :            : }
    1870                 :            : 
    1871                 :            : /* We only propagate across edges with a non-interposable callee.  */
    1872                 :            : 
    1873                 :            : inline bool
    1874                 :    4364660 : ignore_edge_p (struct cgraph_edge *e)
    1875                 :            : {
    1876                 :    4364660 :   enum availability avail;
    1877                 :    4364660 :   e->callee->function_or_virtual_thunk_symbol (&avail, e->caller);
    1878                 :    4364660 :   return (avail <= AVAIL_INTERPOSABLE);
    1879                 :            : }
    1880                 :            : 
    1881                 :            : /* We use a greedy algorithm for inlining of small functions:
    1882                 :            :    all inline candidates are put into a prioritized heap ordered by
    1883                 :            :    increasing badness.
    1884                 :            : 
    1885                 :            :    The inlining of small functions is bounded by unit growth parameters.  */
    1886                 :            : 
    1887                 :            : static void
    1888                 :     163701 : inline_small_functions (void)
    1889                 :            : {
    1890                 :     163701 :   struct cgraph_node *node;
    1891                 :     163701 :   struct cgraph_edge *edge;
    1892                 :     163701 :   edge_heap_t edge_heap (sreal::min ());
    1893                 :     327402 :   auto_bitmap updated_nodes;
    1894                 :     163701 :   int min_size;
    1895                 :     327402 :   auto_vec<cgraph_edge *> new_indirect_edges;
    1896                 :     163701 :   int initial_size = 0;
    1897                 :     163701 :   struct cgraph_node **order = XCNEWVEC (cgraph_node *, symtab->cgraph_count);
    1898                 :     163701 :   struct cgraph_edge_hook_list *edge_removal_hook_holder;
    1899                 :     163701 :   new_indirect_edges.create (8);
    1900                 :            : 
    1901                 :     163701 :   edge_removal_hook_holder
    1902                 :     163701 :     = symtab->add_edge_removal_hook (&heap_edge_removal_hook, &edge_heap);
    1903                 :            : 
    1904                 :            :   /* Compute overall unit size and other global parameters used by badness
    1905                 :            :      metrics.  */
    1906                 :            : 
    1907                 :     163701 :   max_count = profile_count::uninitialized ();
    1908                 :     163701 :   ipa_reduced_postorder (order, true, ignore_edge_p);
    1909                 :     163701 :   free (order);
    1910                 :            : 
    1911                 :    2727480 :   FOR_EACH_DEFINED_FUNCTION (node)
    1912                 :    1200040 :     if (!node->inlined_to)
    1913                 :            :       {
    1914                 :    1200010 :         if (!node->alias && node->analyzed
    1915                 :       3181 :             && (node->has_gimple_body_p () || node->thunk.thunk_p)
    1916                 :    2326110 :             && opt_for_fn (node->decl, optimize))
    1917                 :            :           {
    1918                 :     868501 :             class ipa_fn_summary *info = ipa_fn_summaries->get (node);
    1919                 :     868501 :             struct ipa_dfs_info *dfs = (struct ipa_dfs_info *) node->aux;
    1920                 :            : 
    1921                 :            :             /* Do not account external functions; they will be optimized out
    1922                 :            :                if not inlined.  Also only count the non-cold portion of the program.  */
    1923                 :     868501 :             if (inline_account_function_p (node))
    1924                 :     796609 :               initial_size += ipa_size_summaries->get (node)->size;
    1925                 :     868501 :             info->growth = estimate_growth (node);
    1926                 :            : 
    1927                 :     868501 :             int num_calls = 0;
    1928                 :     868501 :             node->call_for_symbol_and_aliases (sum_callers, &num_calls,
    1929                 :            :                                                true);
    1930                 :     868501 :             if (num_calls == 1)
    1931                 :     309931 :               info->single_caller = true;
    1932                 :     868501 :             if (dfs && dfs->next_cycle)
    1933                 :            :               {
    1934                 :       4639 :                 struct cgraph_node *n2;
    1935                 :       4639 :                 int id = dfs->scc_no + 1;
    1936                 :      10385 :                 for (n2 = node; n2;
    1937                 :       5746 :                      n2 = ((struct ipa_dfs_info *) n2->aux)->next_cycle)
    1938                 :       9278 :                   if (opt_for_fn (n2->decl, optimize))
    1939                 :            :                     {
    1940                 :       9273 :                       ipa_fn_summary *info2 = ipa_fn_summaries->get
    1941                 :       9273 :                          (n2->inlined_to ? n2->inlined_to : n2);
    1942                 :       9273 :                       if (info2->scc_no)
    1943                 :            :                         break;
    1944                 :       5741 :                       info2->scc_no = id;
    1945                 :            :                     }
    1946                 :            :               }
    1947                 :            :           }
    1948                 :            : 
    1949                 :    2579110 :         for (edge = node->callers; edge; edge = edge->next_caller)
    1950                 :    1379100 :           max_count = max_count.max (edge->count.ipa ());
    1951                 :            :       }
    1952                 :     163701 :   ipa_free_postorder_info ();
    1953                 :     163701 :   initialize_growth_caches ();
    1954                 :            : 
    1955                 :     163701 :   if (dump_file)
    1956                 :        192 :     fprintf (dump_file,
    1957                 :            :              "\nDeciding on inlining of small functions.  Starting with size %i.\n",
    1958                 :            :              initial_size);
    1959                 :            : 
    1960                 :     163701 :   overall_size = initial_size;
    1961                 :     163701 :   min_size = overall_size;
    1962                 :            : 
    1963                 :            :   /* Populate the heap with all edges we might inline.  */
    1964                 :            : 
    1965                 :    2727480 :   FOR_EACH_DEFINED_FUNCTION (node)
    1966                 :            :     {
    1967                 :    1200040 :       bool update = false;
    1968                 :    1200040 :       struct cgraph_edge *next = NULL;
    1969                 :    1200040 :       bool has_speculative = false;
    1970                 :            : 
    1971                 :    1200040 :       if (!opt_for_fn (node->decl, optimize))
    1972                 :     281146 :         continue;
    1973                 :            : 
    1974                 :     918893 :       if (dump_file)
    1975                 :       1039 :         fprintf (dump_file, "Enqueueing calls in %s.\n", node->dump_name ());
    1976                 :            : 
    1977                 :    4425830 :       for (edge = node->callees; edge; edge = edge->next_callee)
    1978                 :            :         {
    1979                 :    3506940 :           if (edge->inline_failed
    1980                 :    3506910 :               && !edge->aux
    1981                 :    3506810 :               && can_inline_edge_p (edge, true)
    1982                 :     801259 :               && want_inline_small_function_p (edge, true)
    1983                 :     450964 :               && can_inline_edge_by_limits_p (edge, true)
    1984                 :    3955040 :               && edge->inline_failed)
    1985                 :            :             {
    1986                 :     448105 :               gcc_assert (!edge->aux);
    1987                 :     448105 :               update_edge_key (&edge_heap, edge);
    1988                 :            :             }
    1989                 :    3506940 :           if (edge->speculative)
    1990                 :       4621 :             has_speculative = true;
    1991                 :            :         }
    1992                 :     918893 :       if (has_speculative)
    1993                 :      32166 :         for (edge = node->callees; edge; edge = next)
    1994                 :            :           {
    1995                 :      28664 :             next = edge->next_callee;
    1996                 :      28664 :             if (edge->speculative
    1997                 :      28664 :                 && !speculation_useful_p (edge, edge->aux != NULL))
    1998                 :            :               {
    1999                 :        455 :                 cgraph_edge::resolve_speculation (edge);
    2000                 :        455 :                 update = true;
    2001                 :            :               }
    2002                 :            :           }
    2003                 :       3502 :       if (update)
    2004                 :            :         {
    2005                 :        634 :           struct cgraph_node *where = node->inlined_to
    2006                 :        317 :                                       ? node->inlined_to : node;
    2007                 :        317 :           ipa_update_overall_fn_summary (where);
    2008                 :        317 :           reset_edge_caches (where);
    2009                 :        317 :           update_caller_keys (&edge_heap, where,
    2010                 :            :                               updated_nodes, NULL);
    2011                 :        317 :           update_callee_keys (&edge_heap, where, NULL,
    2012                 :            :                               updated_nodes);
    2013                 :        317 :           bitmap_clear (updated_nodes);
    2014                 :            :         }
    2015                 :            :     }
    2016                 :            : 
    2017                 :     163701 :   gcc_assert (in_lto_p
    2018                 :            :               || !(max_count > 0)
    2019                 :            :               || (profile_info && flag_branch_probabilities));
    2020                 :            : 
    2021                 :    1216430 :   while (!edge_heap.empty ())
    2022                 :            :     {
    2023                 :    1052730 :       int old_size = overall_size;
    2024                 :    1052730 :       struct cgraph_node *where, *callee;
    2025                 :    1052730 :       sreal badness = edge_heap.min_key ();
    2026                 :    1052730 :       sreal current_badness;
    2027                 :    1052730 :       int growth;
    2028                 :            : 
    2029                 :    1052730 :       edge = edge_heap.extract_min ();
    2030                 :    1052730 :       gcc_assert (edge->aux);
    2031                 :    1052730 :       edge->aux = NULL;
    2032                 :    1052730 :       if (!edge->inline_failed || !edge->callee->analyzed)
    2033                 :     619169 :         continue;
    2034                 :            : 
    2035                 :            :       /* Be sure that the caches are maintained consistently.
    2036                 :            :          This check is affected by scaling roundoff errors when compiling for
    2037                 :            :          IPA, thus we skip it in that case.  */
    2038                 :    1052670 :       if (flag_checking && !edge->callee->count.ipa_p ()
    2039                 :    2604080 :           && (!max_count.initialized_p () || !max_count.nonzero_p ()))
    2040                 :            :         {
    2041                 :     951916 :           sreal cached_badness = edge_badness (edge, false);
    2042                 :            :      
    2043                 :     951916 :           int old_size_est = estimate_edge_size (edge);
    2044                 :     951916 :           sreal old_time_est = estimate_edge_time (edge);
    2045                 :     951916 :           int old_hints_est = estimate_edge_hints (edge);
    2046                 :            : 
    2047                 :     951916 :           if (edge_growth_cache != NULL)
    2048                 :     951916 :             edge_growth_cache->remove (edge);
    2049                 :    1692440 :           reset_node_cache (edge->caller->inlined_to
    2050                 :            :                             ? edge->caller->inlined_to
    2051                 :            :                             : edge->caller);
    2052                 :     951916 :           gcc_assert (old_size_est == estimate_edge_size (edge));
    2053                 :     951916 :           gcc_assert (old_time_est == estimate_edge_time (edge));
    2054                 :            :           /* FIXME:
    2055                 :            : 
    2056                 :            :              gcc_assert (old_hints_est == estimate_edge_hints (edge));
    2057                 :            : 
    2058                 :            :              fails with profile feedback because some hints depend on the
    2059                 :            :              maybe_hot_edge_p predicate, and because the callee gets inlined
    2060                 :            :              into other calls, the edge may become cold.
    2061                 :            :              This ought to be fixed by computing relative probabilities
    2062                 :            :              for a given invocation, but that will be better done once the
    2063                 :            :              whole code is converted to sreals.  Disable for now and revert
    2064                 :            :              to the "wrong" value so the enable/disable checking paths agree.  */
    2065                 :     951916 :           edge_growth_cache->get (edge)->hints = old_hints_est + 1;
    2066                 :            : 
    2067                 :            :           /* When updating the edge costs, we only decrease badness in the keys.
    2068                 :            :              Increases of badness are handled lazily; when we see a key with an
    2069                 :            :              out-of-date value on it, we re-insert it now.  */
    2070                 :     951916 :           current_badness = edge_badness (edge, false);
    2071                 :     951916 :           gcc_assert (cached_badness == current_badness);
    2072                 :     951916 :           gcc_assert (current_badness >= badness);
    2073                 :            :         }
    2074                 :            :       else
    2075                 :     100757 :         current_badness = edge_badness (edge, false);
    2076                 :    1052670 :       if (current_badness != badness)
    2077                 :            :         {
    2078                 :     664171 :           if (edge_heap.min () && current_badness > edge_heap.min_key ())
    2079                 :            :             {
    2080                 :     585312 :               edge->aux = edge_heap.insert (current_badness, edge);
    2081                 :     585312 :               continue;
    2082                 :            :             }
    2083                 :            :           else
    2084                 :      78859 :             badness = current_badness;
    2085                 :            :         }
    2086                 :            : 
    2087                 :     467361 :       if (!can_inline_edge_p (edge, true)
    2088                 :     467361 :           || !can_inline_edge_by_limits_p (edge, true))
    2089                 :            :         {
    2090                 :        302 :           resolve_noninline_speculation (&edge_heap, edge);
    2091                 :        302 :           continue;
    2092                 :            :         }
    2093                 :            : 
    2094                 :     467059 :       callee = edge->callee->ultimate_alias_target ();
    2095                 :     467059 :       growth = estimate_edge_growth (edge);
    2096                 :     467059 :       if (dump_file)
    2097                 :            :         {
    2098                 :        557 :           fprintf (dump_file,
    2099                 :            :                    "\nConsidering %s with %i size\n",
    2100                 :            :                    callee->dump_name (),
    2101                 :        557 :                    ipa_size_summaries->get (callee)->size);
    2102                 :       1114 :           fprintf (dump_file,
    2103                 :            :                    " to be inlined into %s in %s:%i\n"
    2104                 :            :                    " Estimated badness is %f, frequency %.2f.\n",
    2105                 :        557 :                    edge->caller->dump_name (),
    2106                 :        557 :                    edge->call_stmt
    2107                 :        530 :                    && (LOCATION_LOCUS (gimple_location ((const gimple *)
    2108                 :            :                                                         edge->call_stmt))
    2109                 :            :                        > BUILTINS_LOCATION)
    2110                 :        521 :                    ? gimple_filename ((const gimple *) edge->call_stmt)
    2111                 :            :                    : "unknown",
    2112                 :        557 :                    edge->call_stmt
    2113                 :        530 :                    ? gimple_lineno ((const gimple *) edge->call_stmt)
    2114                 :            :                    : -1,
    2115                 :            :                    badness.to_double (),
    2116                 :        557 :                    edge->sreal_frequency ().to_double ());
    2117                 :        557 :           if (edge->count.ipa ().initialized_p ())
    2118                 :            :             {
    2119                 :          0 :               fprintf (dump_file, " Called ");
    2120                 :          0 :               edge->count.ipa ().dump (dump_file);
    2121                 :          0 :               fprintf (dump_file, " times\n");
    2122                 :            :             }
    2123                 :        557 :           if (dump_flags & TDF_DETAILS)
    2124                 :        194 :             edge_badness (edge, true);
    2125                 :            :         }
    2126                 :            : 
    2127                 :     467059 :       where = edge->caller;
    2128                 :            : 
    2129                 :     467059 :       if (overall_size + growth > compute_max_insns (where, min_size)
    2130                 :     467059 :           && !DECL_DISREGARD_INLINE_LIMITS (callee->decl))
    2131                 :            :         {
    2132                 :      30701 :           edge->inline_failed = CIF_INLINE_UNIT_GROWTH_LIMIT;
    2133                 :      30701 :           report_inline_failed_reason (edge);
    2134                 :      30701 :           resolve_noninline_speculation (&edge_heap, edge);
    2135                 :      30701 :           continue;
    2136                 :            :         }
    2137                 :            : 
    2138                 :     436358 :       if (!want_inline_small_function_p (edge, true))
    2139                 :            :         {
    2140                 :       1299 :           resolve_noninline_speculation (&edge_heap, edge);
    2141                 :       1299 :           continue;
    2142                 :            :         }
    2143                 :            : 
    2144                 :     435059 :       profile_count old_count = callee->count;
    2145                 :            : 
    2146                 :            :       /* Heuristics for inlining small functions work poorly for
    2147                 :            :          recursive calls, where the effect is similar to loop unrolling.
    2148                 :            :          When inlining such an edge seems profitable, leave the decision
    2149                 :            :          to the specialized recursive inliner.  */
    2150                 :     435059 :       if (edge->recursive_p ())
    2151                 :            :         {
    2152                 :       1445 :           if (where->inlined_to)
    2153                 :        386 :             where = where->inlined_to;
    2154                 :       1445 :           if (!recursive_inlining (edge,
    2155                 :       1445 :                                    opt_for_fn (edge->caller->decl,
    2156                 :            :                                                flag_indirect_inlining)
    2157                 :            :                                    ? &new_indirect_edges : NULL))
    2158                 :            :             {
    2159                 :        199 :               edge->inline_failed = CIF_RECURSIVE_INLINING;
    2160                 :        199 :               resolve_noninline_speculation (&edge_heap, edge);
    2161                 :        199 :               continue;
    2162                 :            :             }
    2163                 :       1246 :           reset_edge_caches (where);
    2164                 :            :           /* The recursive inliner inlines all recursive calls of the function
    2165                 :            :              at once.  Consequently we need to update all callee keys.  */
    2166                 :       1246 :           if (opt_for_fn (edge->caller->decl, flag_indirect_inlining))
    2167                 :       1203 :             add_new_edges_to_heap (&edge_heap, new_indirect_edges);
    2168                 :       1246 :           update_callee_keys (&edge_heap, where, where, updated_nodes);
    2169                 :       1246 :           bitmap_clear (updated_nodes);
    2170                 :            :         }
    2171                 :            :       else
    2172                 :            :         {
    2173                 :     433614 :           struct cgraph_node *outer_node = NULL;
    2174                 :     433614 :           int depth = 0;
    2175                 :            : 
    2176                 :            :           /* Consider the case where self-recursive function A is inlined
    2177                 :            :              into B.  This is a desired optimization in some cases, since
    2178                 :            :              it leads to an effect similar to loop peeling and we might
    2179                 :            :              completely optimize out the recursive call.  However we must
    2180                 :            :              be extra selective.  */
    2181                 :            : 
    2182                 :     433614 :           where = edge->caller;
    2183                 :     669499 :           while (where->inlined_to)
    2184                 :            :             {
    2185                 :     235885 :               if (where->decl == callee->decl)
    2186                 :       6728 :                 outer_node = where, depth++;
    2187                 :     235885 :               where = where->callers->caller;
    2188                 :            :             }
    2189                 :     434913 :           if (outer_node
    2190                 :     433614 :               && !want_inline_self_recursive_call_p (edge, outer_node,
    2191                 :            :                                                      true, depth))
    2192                 :            :             {
    2193                 :       1299 :               edge->inline_failed
    2194                 :       1299 :                 = (DECL_DISREGARD_INLINE_LIMITS (edge->callee->decl)
    2195                 :       1299 :                    ? CIF_RECURSIVE_INLINING : CIF_UNSPECIFIED);
    2196                 :       1299 :               resolve_noninline_speculation (&edge_heap, edge);
    2197                 :       1299 :               continue;
    2198                 :            :             }
    2199                 :     432315 :           else if (depth && dump_file)
    2200                 :          6 :             fprintf (dump_file, " Peeling recursion with depth %i\n", depth);
    2201                 :            : 
    2202                 :     432315 :           gcc_checking_assert (!callee->inlined_to);
    2203                 :            : 
    2204                 :     432315 :           int old_size = ipa_size_summaries->get (where)->size;
    2205                 :     432315 :           sreal old_time = ipa_fn_summaries->get (where)->time;
    2206                 :            : 
    2207                 :     432315 :           inline_call (edge, true, &new_indirect_edges, &overall_size, true);
    2208                 :     432315 :           reset_edge_caches (edge->callee);
    2209                 :     432315 :           add_new_edges_to_heap (&edge_heap, new_indirect_edges);
    2210                 :            : 
    2211                 :            :           /* If the caller's size and time increased, we do not need to
    2212                 :            :              update all edges, because badness is not going to decrease.  */
    2213                 :     432315 :           if (old_size <= ipa_size_summaries->get (where)->size
    2214                 :     821218 :               && old_time <= ipa_fn_summaries->get (where)->time
    2215                 :            :               /* The wrapper penalty may be non-monotonic in this respect.
    2216                 :            :                  Fortunately it only affects small functions.  */
    2217                 :     712600 :               && !wrapper_heuristics_may_apply (where, old_size))
    2218                 :     203284 :             update_callee_keys (&edge_heap, edge->callee, edge->callee,
    2219                 :            :                                 updated_nodes);
    2220                 :            :           else
    2221                 :     229031 :             update_callee_keys (&edge_heap, where,
    2222                 :            :                                 edge->callee,
    2223                 :            :                                 updated_nodes);
    2224                 :            :         }
    2225                 :     433561 :       where = edge->caller;
    2226                 :     433561 :       if (where->inlined_to)
    2227                 :     103683 :         where = where->inlined_to;
    2228                 :            : 
    2229                 :            :       /* Our profitability metric can depend on local properties
    2230                 :            :          such as the number of inlinable calls and the size of the function
    2231                 :            :          body.  After inlining, these properties might change for the
    2232                 :            :          function we inlined into (since its body size changed) and for
    2233                 :            :          the functions it calls (since the number of their inlinable
    2234                 :            :          callers might change).  */
    2235                 :     433561 :       update_caller_keys (&edge_heap, where, updated_nodes, NULL);
    2236                 :            :       /* The offline copy's count has possibly changed; recompute it if a
    2237                 :            :          profile is available.  */
    2238                 :     433561 :       struct cgraph_node *n
    2239                 :     433561 :               = cgraph_node::get (edge->callee->decl)->ultimate_alias_target ();
    2240                 :     562923 :       if (n != edge->callee && n->analyzed && !(n->count == old_count)
    2241                 :     433590 :           && n->count.ipa_p ())
    2242                 :         29 :         update_callee_keys (&edge_heap, n, NULL, updated_nodes);
    2243                 :     433561 :       bitmap_clear (updated_nodes);
    2244                 :            : 
    2245                 :     433561 :       if (dump_enabled_p ())
    2246                 :            :         {
    2247                 :        587 :           ipa_fn_summary *s = ipa_fn_summaries->get (where);
    2248                 :            : 
    2249                 :            :           /* dump_printf can't handle %+i.  */
    2250                 :        587 :           char buf_net_change[100];
    2251                 :        587 :           snprintf (buf_net_change, sizeof buf_net_change, "%+i",
    2252                 :            :                     overall_size - old_size);
    2253                 :            : 
    2254                 :       1174 :           dump_printf_loc (MSG_OPTIMIZED_LOCATIONS, edge->call_stmt,
    2255                 :            :                            " Inlined %C into %C which now has time %f and "
    2256                 :            :                            "size %i, net change of %s%s.\n",
    2257                 :            :                            edge->callee, edge->caller,
    2258                 :            :                            s->time.to_double (),
    2259                 :        587 :                            ipa_size_summaries->get (edge->caller)->size,
    2260                 :            :                            buf_net_change,
    2261                 :        587 :                            cross_module_call_p (edge) ? " (cross module)":"");
    2262                 :            :         }
    2263                 :     433561 :       if (min_size > overall_size)
    2264                 :            :         {
    2265                 :     110661 :           min_size = overall_size;
    2266                 :            : 
    2267                 :     110661 :           if (dump_file)
    2268                 :        436 :             fprintf (dump_file, "New minimal size reached: %i\n", min_size);
    2269                 :            :         }
    2270                 :            :     }
    2271                 :            : 
    2272                 :     163701 :   free_growth_caches ();
    2273                 :     163701 :   if (dump_enabled_p ())
    2274                 :        439 :     dump_printf (MSG_NOTE,
    2275                 :            :                  "Unit growth for small function inlining: %i->%i (%i%%)\n",
    2276                 :            :                  initial_size, overall_size,
    2277                 :        196 :                  initial_size ? overall_size * 100 / (initial_size) - 100: 0);
    2278                 :     163701 :   symtab->remove_edge_removal_hook (edge_removal_hook_holder);
    2279                 :     163701 : }
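The pop/recompute/re-insert discipline in the loop above can be sketched as a toy worklist.  This is a minimal illustration, not GCC's code: `struct entry`, `pop_min` and `process_next` are hypothetical names, and a linear scan stands in for the real fibonacci heap.  What it preserves is the lazy rule: a popped entry whose cached key went stale is refreshed and pushed back only when it is no longer the best remaining candidate.

```c
#include <assert.h>
#include <stddef.h>

/* Toy worklist entry: CACHED is the key stored in the heap, CURRENT is
   what edge_badness would compute now.  DONE marks popped entries.  */
struct entry { int cached; int current; int done; };

/* Pop the entry with the smallest cached key; return its index or -1.  */
static int
pop_min (struct entry *e, size_t n)
{
  int best = -1;
  for (size_t i = 0; i < n; i++)
    if (!e[i].done && (best < 0 || e[i].cached < e[best].cached))
      best = (int) i;
  if (best >= 0)
    e[best].done = 1;
  return best;
}

/* Return the next entry to process, in order of *current* keys, using
   only lazy updates: a popped entry whose current key now exceeds the
   best remaining cached key is refreshed and re-inserted.  */
static int
process_next (struct entry *e, size_t n)
{
  for (;;)
    {
      int i = pop_min (e, n);
      if (i < 0)
        return -1;
      int min_rest = -1;
      for (size_t j = 0; j < n; j++)
        if (!e[j].done && (min_rest < 0 || e[j].cached < min_rest))
          min_rest = e[j].cached;
      if (e[i].current != e[i].cached && min_rest >= 0
          && e[i].current > min_rest)
        {
          e[i].cached = e[i].current;   /* refresh the stale key */
          e[i].done = 0;                /* ... and re-insert */
          continue;
        }
      return i;                         /* still the best candidate */
    }
}
```

With cached keys {1, 2, 3} but current keys {10, 2, 3}, the entries come out ordered by their current keys even though only the popped entry is ever fixed up, which is exactly why increases of badness can be handled lazily.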
    2280                 :            : 
    2281                 :            : /* Flatten NODE.  Performed both during early inlining and
    2282                 :            :    at IPA inlining time.  */
    2283                 :            : 
    2284                 :            : static void
    2285                 :        226 : flatten_function (struct cgraph_node *node, bool early, bool update)
    2286                 :            : {
    2287                 :        226 :   struct cgraph_edge *e;
    2288                 :            : 
    2289                 :            :   /* We shouldn't be called recursively when we are being processed.  */
    2290                 :        226 :   gcc_assert (node->aux == NULL);
    2291                 :            : 
    2292                 :        226 :   node->aux = (void *) node;
    2293                 :            : 
    2294                 :        720 :   for (e = node->callees; e; e = e->next_callee)
    2295                 :            :     {
    2296                 :        494 :       struct cgraph_node *orig_callee;
    2297                 :        494 :       struct cgraph_node *callee = e->callee->ultimate_alias_target ();
    2298                 :            : 
    2299                 :            :       /* Have we hit a cycle?  It is time to give up.  */
    2300                 :        494 :       if (callee->aux)
    2301                 :            :         {
    2302                 :         15 :           if (dump_enabled_p ())
    2303                 :          0 :             dump_printf_loc (MSG_MISSED_OPTIMIZATION, e->call_stmt,
    2304                 :            :                              "Not inlining %C into %C to avoid cycle.\n",
    2305                 :            :                              callee, e->caller);
    2306                 :         15 :           if (cgraph_inline_failed_type (e->inline_failed) != CIF_FINAL_ERROR)
    2307                 :         15 :             e->inline_failed = CIF_RECURSIVE_INLINING;
    2308                 :         15 :           continue;
    2309                 :            :         }
    2310                 :            : 
    2311                 :            :       /* When the edge is already inlined, we just need to recurse into
    2312                 :            :          it in order to fully flatten the leaves.  */
    2313                 :        479 :       if (!e->inline_failed)
    2314                 :            :         {
    2315                 :          0 :           flatten_function (callee, early, false);
    2316                 :          0 :           continue;
    2317                 :            :         }
    2318                 :            : 
    2319                 :            :       /* The flatten attribute needs to be processed during late inlining.
    2320                 :            :          For extra code quality, however, we also do flattening during
    2321                 :            :          early optimization.  */
    2322                 :        264 :       if (!early
    2323                 :        479 :           ? !can_inline_edge_p (e, true)
    2324                 :        215 :             && !can_inline_edge_by_limits_p (e, true)
    2325                 :        264 :           : !can_early_inline_edge_p (e))
    2326                 :        357 :         continue;
    2327                 :            : 
    2328                 :        122 :       if (e->recursive_p ())
    2329                 :            :         {
    2330                 :          0 :           if (dump_enabled_p ())
    2331                 :          0 :             dump_printf_loc (MSG_MISSED_OPTIMIZATION, e->call_stmt,
    2332                 :            :                              "Not inlining: recursive call.\n");
    2333                 :          0 :           continue;
    2334                 :            :         }
    2335                 :            : 
    2336                 :        122 :       if (gimple_in_ssa_p (DECL_STRUCT_FUNCTION (node->decl))
    2337                 :        244 :           != gimple_in_ssa_p (DECL_STRUCT_FUNCTION (callee->decl)))
    2338                 :            :         {
    2339                 :          4 :           if (dump_enabled_p ())
    2340                 :          4 :             dump_printf_loc (MSG_MISSED_OPTIMIZATION, e->call_stmt,
    2341                 :            :                              "Not inlining: SSA form does not match.\n");
    2342                 :          4 :           continue;
    2343                 :            :         }
    2344                 :            : 
    2345                 :            :       /* Inline the edge and flatten the inline clone.  Avoid
    2346                 :            :          recursing through the original node if the node was cloned.  */
    2347                 :        118 :       if (dump_enabled_p ())
    2348                 :          3 :         dump_printf_loc (MSG_OPTIMIZED_LOCATIONS, e->call_stmt,
    2349                 :            :                          " Inlining %C into %C.\n",
    2350                 :            :                          callee, e->caller);
    2351                 :        118 :       orig_callee = callee;
    2352                 :        118 :       inline_call (e, true, NULL, NULL, false);
    2353                 :        118 :       if (e->callee != orig_callee)
    2354                 :         85 :         orig_callee->aux = (void *) node;
    2355                 :        118 :       flatten_function (e->callee, early, false);
    2356                 :        118 :       if (e->callee != orig_callee)
    2357                 :         85 :         orig_callee->aux = NULL;
    2358                 :            :     }
    2359                 :            : 
    2360                 :        226 :   node->aux = NULL;
    2361                 :        226 :   cgraph_node *where = node->inlined_to ? node->inlined_to : node;
    2362                 :        226 :   if (update && opt_for_fn (where->decl, optimize))
    2363                 :        105 :     ipa_update_overall_fn_summary (where);
    2364                 :        226 : }
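The aux-marking scheme that `flatten_function` uses to avoid cycles can be sketched with a toy call graph.  The names below (`toy_node`, `flatten`) are illustrative, assuming a fixed-size adjacency list; the invariant is the same: a node stays marked while its callees are being flattened, any edge back to a marked node is skipped, and the mark is cleared on the way out.

```c
#include <assert.h>

#define MAX_NODES 8

/* Toy call-graph node; MARK plays the role of node->aux.  */
struct toy_node
{
  int mark;
  int ncallees;
  int callee[MAX_NODES];        /* indices into the node array */
};

/* Recursively "flatten" node I: count the edges that would be inlined,
   skipping any edge that leads back to a node currently being
   processed (a cycle).  */
static int
flatten (struct toy_node *g, int i)
{
  int inlined = 0;
  g[i].mark = 1;                /* node->aux = node */
  for (int k = 0; k < g[i].ncallees; k++)
    {
      int c = g[i].callee[k];
      if (g[c].mark)
        continue;               /* we've hit a cycle; give up on it */
      inlined += 1 + flatten (g, c);
    }
  g[i].mark = 0;                /* node->aux = NULL */
  return inlined;
}
```

On the cyclic graph 0 → 1 → 2 → 0, the edge closing the cycle is skipped and only two edges are inlined, with all marks cleared afterwards.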
    2365                 :            : 
    2366                 :            : /* Inline NODE into all callers.  Worker for cgraph_for_node_and_aliases.
    2367                 :            :    DATA points to the number of calls originally found, so that we avoid
    2368                 :            :    infinite recursion.  */
    2369                 :            : 
    2370                 :            : static bool
    2371                 :      24449 : inline_to_all_callers_1 (struct cgraph_node *node, void *data,
    2372                 :            :                          hash_set<cgraph_node *> *callers)
    2373                 :            : {
    2374                 :      24449 :   int *num_calls = (int *)data;
    2375                 :      24449 :   bool callee_removed = false;
    2376                 :            : 
    2377                 :      49448 :   while (node->callers && !node->inlined_to)
    2378                 :            :     {
    2379                 :      25570 :       struct cgraph_node *caller = node->callers->caller;
    2380                 :            : 
    2381                 :      25570 :       if (!can_inline_edge_p (node->callers, true)
    2382                 :      25570 :           || !can_inline_edge_by_limits_p (node->callers, true)
    2383                 :      51140 :           || node->callers->recursive_p ())
    2384                 :            :         {
    2385                 :          0 :           if (dump_file)
    2386                 :          0 :             fprintf (dump_file, "Uninlinable call found; giving up.\n");
    2387                 :          0 :           *num_calls = 0;
    2388                 :          0 :           return false;
    2389                 :            :         }
    2390                 :            : 
    2391                 :      25570 :       if (dump_file)
    2392                 :            :         {
    2393                 :          5 :           cgraph_node *ultimate = node->ultimate_alias_target ();
    2394                 :          5 :           fprintf (dump_file,
    2395                 :            :                    "\nInlining %s size %i.\n",
    2396                 :            :                    ultimate->dump_name (),
    2397                 :          5 :                    ipa_size_summaries->get (ultimate)->size);
    2398                 :          5 :           fprintf (dump_file,
    2399                 :            :                    " Called once from %s %i insns.\n",
    2400                 :            :                    node->callers->caller->dump_name (),
    2401                 :          5 :                    ipa_size_summaries->get (node->callers->caller)->size);
    2402                 :            :         }
    2403                 :            : 
    2404                 :            :       /* Remember which callers we inlined into, delaying the update of
    2405                 :            :          the overall summary.  */
    2406                 :      25570 :       callers->add (node->callers->caller);
    2407                 :      25570 :       inline_call (node->callers, true, NULL, NULL, false, &callee_removed);
    2408                 :      25570 :       if (dump_file)
    2409                 :          5 :         fprintf (dump_file,
    2410                 :            :                  " Inlined into %s which now has %i size\n",
    2411                 :            :                  caller->dump_name (),
    2412                 :          5 :                  ipa_size_summaries->get (caller)->size);
    2413                 :      25570 :       if (!(*num_calls)--)
    2414                 :            :         {
    2415                 :          0 :           if (dump_file)
    2416                 :          0 :             fprintf (dump_file, "New calls found; giving up.\n");
    2417                 :          0 :           return callee_removed;
    2418                 :            :         }
    2419                 :      25570 :       if (callee_removed)
    2420                 :            :         return true;
    2421                 :            :     }
    2422                 :            :   return false;
    2423                 :            : }
    2424                 :            : 
    2425                 :            : /* Wrapper around inline_to_all_callers_1 doing delayed overall summary
    2426                 :            :    update.  */
    2427                 :            : 
    2428                 :            : static bool
    2429                 :      24449 : inline_to_all_callers (struct cgraph_node *node, void *data)
    2430                 :            : {
    2431                 :      24449 :   hash_set<cgraph_node *> callers;
    2432                 :      24449 :   bool res = inline_to_all_callers_1 (node, data, &callers);
    2433                 :            :   /* Perform the delayed update of the overall summary of all callers
    2434                 :            :      processed.  This avoids quadratic behavior in the cases where
    2435                 :            :      we have a lot of calls to the same function.  */
    2436                 :      94932 :   for (hash_set<cgraph_node *>::iterator i = callers.begin ();
    2437                 :      47466 :        i != callers.end (); ++i)
    2438                 :      23017 :     ipa_update_overall_fn_summary ((*i)->inlined_to ? (*i)->inlined_to : *i);
    2439                 :      24449 :   return res;
    2440                 :            : }
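The delayed-update idea in the wrapper above can be sketched independently: collecting the distinct callers in a set and recomputing each summary once avoids the quadratic cost of an update after every individual inlined call.  `caller_set_add` and `updates_needed` below are hypothetical stand-ins for `hash_set` and `ipa_update_overall_fn_summary`, not GCC's code.

```c
#include <assert.h>
#include <stddef.h>

#define MAX_CALLERS 32

/* Tiny stand-in for hash_set<cgraph_node *>.  */
struct caller_set { int ids[MAX_CALLERS]; size_t n; };

/* Add ID unless already present; return 1 when newly added.  */
static int
caller_set_add (struct caller_set *s, int id)
{
  for (size_t i = 0; i < s->n; i++)
    if (s->ids[i] == id)
      return 0;
  s->ids[s->n++] = id;
  return 1;
}

/* CALLS holds one caller id per inlined call site.  Return how many
   summary recomputations the delayed scheme performs: one per distinct
   caller instead of one per call.  */
static size_t
updates_needed (const int *calls, size_t ncalls)
{
  struct caller_set seen = { {0}, 0 };
  for (size_t i = 0; i < ncalls; i++)
    caller_set_add (&seen, calls[i]);
  return seen.n;
}
```

Five calls from two distinct callers thus cost two summary updates rather than five, which is the behavior the comment in `inline_to_all_callers` describes.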
    2441                 :            : 
    2442                 :            : /* Output overall time estimate.  */
    2443                 :            : static void
    2444                 :        384 : dump_overall_stats (void)
    2445                 :            : {
    2446                 :        384 :   sreal sum_weighted = 0, sum = 0;
    2447                 :        384 :   struct cgraph_node *node;
    2448                 :            : 
    2449                 :       3093 :   FOR_EACH_DEFINED_FUNCTION (node)
    2450                 :       2325 :     if (!node->inlined_to
    2451                 :       1697 :         && !node->alias)
    2452                 :            :       {
    2453                 :       3840 :         ipa_fn_summary *s = ipa_fn_summaries->get (node);
    2454                 :       1407 :         if (s != NULL)
    2455                 :            :           {
    2456                 :       1407 :             sum += s->time;
    2457                 :       1407 :             if (node->count.ipa ().initialized_p ())
    2458                 :         14 :               sum_weighted += s->time * node->count.ipa ().to_gcov_type ();
    2459                 :            :           }
    2460                 :            :       }
    2461                 :        384 :   fprintf (dump_file, "Overall time estimate: "
    2462                 :            :            "%f weighted by profile: "
    2463                 :            :            "%f\n", sum.to_double (), sum_weighted.to_double ());
    2464                 :        384 : }
    2465                 :            : 
    2466                 :            : /* Output some useful stats about inlining.  */
    2467                 :            : 
    2468                 :            : static void
    2469                 :        192 : dump_inline_stats (void)
    2470                 :            : {
    2471                 :        192 :   int64_t inlined_cnt = 0, inlined_indir_cnt = 0;
    2472                 :        192 :   int64_t inlined_virt_cnt = 0, inlined_virt_indir_cnt = 0;
    2473                 :        192 :   int64_t noninlined_cnt = 0, noninlined_indir_cnt = 0;
    2474                 :        192 :   int64_t noninlined_virt_cnt = 0, noninlined_virt_indir_cnt = 0;
    2475                 :        192 :   int64_t inlined_speculative = 0, inlined_speculative_ply = 0;
    2476                 :        192 :   int64_t indirect_poly_cnt = 0, indirect_cnt = 0;
    2477                 :        192 :   int64_t reason[CIF_N_REASONS][2];
    2478                 :       5952 :   sreal reason_freq[CIF_N_REASONS];
    2479                 :        192 :   int i;
    2480                 :        192 :   struct cgraph_node *node;
    2481                 :            : 
    2482                 :        192 :   memset (reason, 0, sizeof (reason));
    2483                 :       5952 :   for (i = 0; i < CIF_N_REASONS; i++)
    2484                 :       5760 :     reason_freq[i] = 0;
    2485                 :       2818 :   FOR_EACH_DEFINED_FUNCTION (node)
    2486                 :            :   {
    2487                 :       1217 :     struct cgraph_edge *e;
    2488                 :       6208 :     for (e = node->callees; e; e = e->next_callee)
    2489                 :            :       {
    2490                 :       4991 :         if (e->inline_failed)
    2491                 :            :           {
    2492                 :       4366 :             if (e->count.ipa ().initialized_p ())
    2493                 :       2609 :               reason[(int) e->inline_failed][0] += e->count.ipa ().to_gcov_type ();
    2494                 :       4366 :             reason_freq[(int) e->inline_failed] += e->sreal_frequency ();
    2495                 :       4366 :             reason[(int) e->inline_failed][1] ++;
    2496                 :       4366 :             if (DECL_VIRTUAL_P (e->callee->decl)
    2497                 :       4366 :                 && e->count.ipa ().initialized_p ())
    2498                 :            :               {
    2499                 :          0 :                 if (e->indirect_inlining_edge)
    2500                 :          0 :                   noninlined_virt_indir_cnt += e->count.ipa ().to_gcov_type ();
    2501                 :            :                 else
    2502                 :          0 :                   noninlined_virt_cnt += e->count.ipa ().to_gcov_type ();
    2503                 :            :               }
    2504                 :       4366 :             else if (e->count.ipa ().initialized_p ())
    2505                 :            :               {
    2506                 :       2609 :                 if (e->indirect_inlining_edge)
    2507                 :          0 :                   noninlined_indir_cnt += e->count.ipa ().to_gcov_type ();
    2508                 :            :                 else
    2509                 :       2609 :                   noninlined_cnt += e->count.ipa ().to_gcov_type ();
    2510                 :            :               }
    2511                 :            :           }
    2512                 :        625 :         else if (e->count.ipa ().initialized_p ())
    2513                 :            :           {
    2514                 :          0 :             if (e->speculative)
    2515                 :            :               {
    2516                 :          0 :                 if (DECL_VIRTUAL_P (e->callee->decl))
    2517                 :          0 :                   inlined_speculative_ply += e->count.ipa ().to_gcov_type ();
    2518                 :            :                 else
    2519                 :          0 :                   inlined_speculative += e->count.ipa ().to_gcov_type ();
    2520                 :            :               }
    2521                 :          0 :             else if (DECL_VIRTUAL_P (e->callee->decl))
    2522                 :            :               {
    2523                 :          0 :                 if (e->indirect_inlining_edge)
    2524                 :          0 :                   inlined_virt_indir_cnt += e->count.ipa ().to_gcov_type ();
    2525                 :            :                 else
    2526                 :          0 :                   inlined_virt_cnt += e->count.ipa ().to_gcov_type ();
    2527                 :            :               }
    2528                 :            :             else
    2529                 :            :               {
    2530                 :          0 :                 if (e->indirect_inlining_edge)
    2531                 :          0 :                   inlined_indir_cnt += e->count.ipa ().to_gcov_type ();
    2532                 :            :                 else
    2533                 :          0 :                   inlined_cnt += e->count.ipa ().to_gcov_type ();
    2534                 :            :               }
    2535                 :            :           }
    2536                 :            :       }
    2537                 :       1339 :     for (e = node->indirect_calls; e; e = e->next_callee)
    2538                 :        122 :       if (e->indirect_info->polymorphic
    2539                 :        122 :           && e->count.ipa ().initialized_p ())
    2540                 :          0 :         indirect_poly_cnt += e->count.ipa ().to_gcov_type ();
    2541                 :        122 :       else if (e->count.ipa ().initialized_p ())
    2542                 :          0 :         indirect_cnt += e->count.ipa ().to_gcov_type ();
    2543                 :            :   }
    2544                 :        192 :   if (max_count.initialized_p ())
    2545                 :            :     {
    2546                 :          0 :       fprintf (dump_file,
    2547                 :            :                "Inlined %" PRId64 " + speculative "
    2548                 :            :                "%" PRId64 " + speculative polymorphic "
    2549                 :            :                "%" PRId64 " + previously indirect "
    2550                 :            :                "%" PRId64 " + virtual "
    2551                 :            :                "%" PRId64 " + virtual and previously indirect "
    2552                 :            :                "%" PRId64 "\n" "Not inlined "
    2553                 :            :                "%" PRId64 " + previously indirect "
    2554                 :            :                "%" PRId64 " + virtual "
    2555                 :            :                "%" PRId64 " + virtual and previously indirect "
    2556                 :            :                "%" PRId64 " + still indirect "
    2557                 :            :                "%" PRId64 " + still indirect polymorphic "
    2558                 :            :                "%" PRId64 "\n", inlined_cnt,
    2559                 :            :                inlined_speculative, inlined_speculative_ply,
    2560                 :            :                inlined_indir_cnt, inlined_virt_cnt, inlined_virt_indir_cnt,
    2561                 :            :                noninlined_cnt, noninlined_indir_cnt, noninlined_virt_cnt,
    2562                 :            :                noninlined_virt_indir_cnt, indirect_cnt, indirect_poly_cnt);
    2563                 :          0 :       fprintf (dump_file, "Removed speculations ");
    2564                 :          0 :       spec_rem.dump (dump_file);
    2565                 :          0 :       fprintf (dump_file, "\n");
    2566                 :            :     }
    2567                 :        192 :   dump_overall_stats ();
    2568                 :        192 :   fprintf (dump_file, "\nWhy inlining failed?\n");
    2569                 :       5952 :   for (i = 0; i < CIF_N_REASONS; i++)
    2570                 :       5760 :     if (reason[i][1])
    2571                 :        196 :       fprintf (dump_file, "%-50s: %8i calls, %8f freq, %" PRId64" count\n",
    2572                 :            :                cgraph_inline_failed_string ((cgraph_inline_failed_t) i),
    2573                 :            :                (int) reason[i][1], reason_freq[i].to_double (), reason[i][0]);
    2574                 :        192 : }
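A minimal sketch (hypothetical names, Python rather than GCC internals) of the per-reason bookkeeping done by the statistics dump above: for each call edge whose inlining failed, three aggregates are kept per failure reason: the summed IPA profile count (only when the profile is initialized, mirroring `e->count.ipa ().initialized_p ()`), the summed estimated frequency, and the number of call sites.

```python
from collections import defaultdict

def aggregate_failures(edges):
    """edges: iterable of (reason, profile_count_or_None, frequency)."""
    count = defaultdict(int)    # like reason[i][0]: summed profile counts
    calls = defaultdict(int)    # like reason[i][1]: number of call sites
    freq = defaultdict(float)   # like reason_freq[i]: summed frequencies
    for reason, profile, frequency in edges:
        if profile is not None:          # profile data available for edge
            count[reason] += profile
        freq[reason] += frequency        # frequency is always accumulated
        calls[reason] += 1
    return count, calls, freq
```

Note that, as in the C code, the frequency and call-site counters advance even when no profile count is available for the edge.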
    2575                 :            : 
    2576                 :            : /* Called when a node is removed.  */
    2577                 :            : 
    2578                 :            : static void
    2579                 :          0 : flatten_remove_node_hook (struct cgraph_node *node, void *data)
    2580                 :            : {
    2581                 :          0 :   if (lookup_attribute ("flatten", DECL_ATTRIBUTES (node->decl)) == NULL)
    2582                 :            :     return;
    2583                 :            : 
    2584                 :          0 :   hash_set<struct cgraph_node *> *removed
    2585                 :            :     = (hash_set<struct cgraph_node *> *) data;
    2586                 :          0 :   removed->add (node);
    2587                 :            : }
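ipa_inline below first obtains an order of the call graph via ipa_reverse_postorder so that decisions are made in topological order. The reason a DFS postorder over callee edges gives that order can be sketched as follows (a toy model with hypothetical names, not GCC's implementation): in an acyclic call graph a node is emitted only after all of its callees, so callee summaries can be finalized before their callers are considered.

```python
def postorder_callees_first(graph, roots):
    """DFS postorder over callee edges: in an acyclic call graph every
    function appears after all of its callees, i.e. a topological order
    suitable for bottom-up inlining decisions."""
    order, seen = [], set()

    def dfs(n):
        seen.add(n)
        for callee in graph.get(n, ()):
            if callee not in seen:
                dfs(callee)
        order.append(n)  # emitted only after all callees

    for r in roots:
        if r not in seen:
            dfs(r)
    return order

# "main" calls "helper" and "work"; "work" also calls "helper".
calls = {"main": ["helper", "work"], "work": ["helper"]}
print(postorder_callees_first(calls, ["main"]))  # callees before callers
```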
    2588                 :            : 
    2589                 :            : /* Decide on the inlining.  We do so in topological order to avoid the
    2590                 :            :    expense of repeatedly updating data structures.  */
    2591                 :            : 
    2592                 :            : static unsigned int
    2593                 :     163701 : ipa_inline (void)
    2594                 :            : {
    2595                 :     163701 :   struct cgraph_node *node;
    2596                 :     163701 :   int nnodes;
    2597                 :     163701 :   struct cgraph_node **order;
    2598                 :     163701 :   int i, j;
    2599                 :     163701 :   int cold;
    2600                 :     163701 :   bool remove_functions = false;
    2601                 :            : 
    2602                 :     163701 :   order = XCNEWVEC (struct cgraph_node *, symtab->cgraph_count);
    2603                 :            : 
    2604                 :     163701 :   if (dump_file)
    2605                 :        192 :     ipa_dump_fn_summaries (dump_file);
    2606                 :            : 
    2607                 :     163701 :   nnodes = ipa_reverse_postorder (order);
    2608                 :     163701 :   spec_rem = profile_count::zero ();
    2609                 :            : 
    2610                 :    5513860 :   FOR_EACH_FUNCTION (node)
    2611                 :            :     {
    2612                 :    2593230 :       node->aux = 0;
    2613                 :            : 
    2614                 :            :       /* Recompute the default reasons for inlining because they may have
    2615                 :            :          changed during merging.  */
    2616                 :    2593230 :       if (in_lto_p)
    2617                 :            :         {
    2618                 :     276101 :           for (cgraph_edge *e = node->callees; e; e = e->next_callee)
    2619                 :            :             {
    2620                 :     188477 :               gcc_assert (e->inline_failed);
    2621                 :     188477 :               initialize_inline_failed (e);
    2622                 :            :             }
    2623                 :      88686 :           for (cgraph_edge *e = node->indirect_calls; e; e = e->next_callee)
    2624                 :       1062 :             initialize_inline_failed (e);
    2625                 :            :         }
    2626                 :            :     }
    2627                 :            : 
    2628                 :     163701 :   if (dump_file)
    2629                 :        192 :     fprintf (dump_file, "\nFlattening functions:\n");
    2630                 :            : 
    2631                 :            :   /* First shrink the order array so that it only contains nodes with
    2632                 :            :      the flatten attribute.  */
    2633                 :    2756930 :   for (i = nnodes - 1, j = i; i >= 0; i--)
    2634                 :            :     {
    2635                 :    2593230 :       node = order[i];
    2636                 :    2593230 :       if (node->definition
    2637                 :            :           /* Do not try to flatten aliases.  These may happen for example when
    2638                 :            :              creating local aliases.  */
    2639                 :    2593230 :           && !node->alias
    2640                 :    3719340 :           && lookup_attribute ("flatten",
    2641                 :    1126110 :                                DECL_ATTRIBUTES (node->decl)) != NULL)
    2642                 :         66 :         order[j--] = order[i];
    2643                 :            :     }
    2644                 :            : 
    2645                 :            :   /* After the above loop, order[j + 1] ... order[nnodes - 1] contain
    2646                 :            :      nodes with flatten attribute.  If there is more than one such
    2647                 :            :      node, we need to register a node removal hook, as flatten_function
    2648                 :            :      could remove other nodes with flatten attribute.  See PR82801.  */
    2649                 :     163701 :   struct cgraph_node_hook_list *node_removal_hook_holder = NULL;
    2650                 :     163701 :   hash_set<struct cgraph_node *> *flatten_removed_nodes = NULL;
    2651                 :     163701 :   if (j < nnodes - 2)
    2652                 :            :     {
    2653                 :          6 :       flatten_removed_nodes = new hash_set<struct cgraph_node *>;
    2654                 :          6 :       node_removal_hook_holder
    2655                 :          6 :         = symtab->add_cgraph_removal_hook (&flatten_remove_node_hook,
    2656                 :            :                                            flatten_removed_nodes);
    2657                 :            :     }
    2658                 :            : 
    2659                 :            :   /* In the first pass handle functions to be flattened.  Do this with
    2660                 :            :      a priority so none of our later choices will make this impossible.  */
    2661                 :     163767 :   for (i = nnodes - 1; i > j; i--)
    2662                 :            :     {
    2663                 :         66 :       node = order[i];
    2664                 :         66 :       if (flatten_removed_nodes
    2665                 :         99 :           && flatten_removed_nodes->contains (node))
    2666                 :          0 :         continue;
    2667                 :            : 
    2668                 :            :       /* Handle nodes to be flattened.
    2669                 :            :          Ideally, when processing callees we would stop inlining at the
    2670                 :            :          entry of cycles, possibly cloning that entry point and trying
    2671                 :            :          to flatten the node itself, turning it into a self-recursive
    2672                 :            :          function.  */
    2673                 :         66 :       if (dump_file)
    2674                 :          4 :         fprintf (dump_file, "Flattening %s\n", node->dump_name ());
    2675                 :         66 :       flatten_function (node, false, true);
    2676                 :            :     }
    2677                 :            : 
    2678                 :     163701 :   if (j < nnodes - 2)
    2679                 :            :     {
    2680                 :          6 :       symtab->remove_cgraph_removal_hook (node_removal_hook_holder);
    2681                 :         12 :       delete flatten_removed_nodes;
    2682                 :            :     }
    2683                 :     163701 :   free (order);
    2684                 :            : 
    2685                 :     163701 :   if (dump_file)
    2686                 :        192 :     dump_overall_stats ();
    2687                 :            : 
    2688                 :     163701 :   inline_small_functions ();
    2689                 :            : 
    2690                 :     163701 :   gcc_assert (symtab->state == IPA_SSA);
    2691                 :     163701 :   symtab->state = IPA_SSA_AFTER_INLINING;
    2692                 :            :   /* Do the first after-inlining removal.  We want to remove all "stale" extern
    2693                 :            :      inline functions and virtual functions so we really know what is called
    2694                 :            :      once.  */
    2695                 :     163701 :   symtab->remove_unreachable_nodes (dump_file);
    2696                 :            : 
    2697                 :            :   /* Inline functions whose code size will shrink after inlining into all
    2698                 :            :      callers, because the out-of-line copy is eliminated.
    2699                 :            :      We do this regardless of the callee size as long as function growth
    2700                 :            :      limits are met.  */
    2701                 :     163701 :   if (dump_file)
    2702                 :        192 :     fprintf (dump_file,
    2703                 :            :              "\nDeciding on functions to be inlined into all callers and "
    2704                 :            :              "removing useless speculations:\n");
    2705                 :            : 
    2706                 :            :   /* Inlining one function called once has a good chance of preventing the
    2707                 :            :      inlining of another function into the same caller.  Ideally we should
    2708                 :            :      work in priority order, but inlining hot functions first is probably
    2709                 :            :      a good cut without the extra pain of maintaining the queue.
    2710                 :            : 
    2711                 :            :      ??? This is not really fitting the bill perfectly: inlining a function
    2712                 :            :      into its caller often leads to better optimization of the caller due
    2713                 :            :      to the increased context.
    2714                 :            :      For example, if the main() function calls a function that outputs help
    2715                 :            :      and then a function that does the main optimization, we should inline
    2716                 :            :      the second with priority even if both calls are cold by themselves.
    2717                 :            : 
    2718                 :            :      We probably want to implement a new predicate replacing our use of
    2719                 :            :      maybe_hot_edge, interpreted as maybe_hot_edge || callee is known
    2720                 :            :      to be hot.  */
    2721                 :     491103 :   for (cold = 0; cold <= 1; cold ++)
    2722                 :            :     {
    2723                 :    6989780 :       FOR_EACH_DEFINED_FUNCTION (node)
    2724                 :            :         {
    2725                 :    3167490 :           struct cgraph_edge *edge, *next;
    2726                 :    3167490 :           bool update = false;
    2727                 :            : 
    2728                 :    3167490 :           if (!opt_for_fn (node->decl, optimize)
    2729                 :    3167490 :               || !opt_for_fn (node->decl, flag_inline_functions_called_once))
    2730                 :     563180 :             continue;
    2731                 :            : 
    2732                 :   10861200 :           for (edge = node->callees; edge; edge = next)
    2733                 :            :             {
    2734                 :    8256840 :               next = edge->next_callee;
    2735                 :    8256840 :               if (edge->speculative && !speculation_useful_p (edge, false))
    2736                 :            :                 {
    2737                 :        119 :                   if (edge->count.ipa ().initialized_p ())
    2738                 :          0 :                     spec_rem += edge->count.ipa ();
    2739                 :        119 :                   cgraph_edge::resolve_speculation (edge);
    2740                 :        119 :                   update = true;
    2741                 :        119 :                   remove_functions = true;
    2742                 :            :                 }
    2743                 :            :             }
    2744                 :    2604310 :           if (update)
    2745                 :            :             {
    2746                 :        206 :               struct cgraph_node *where = node->inlined_to
    2747                 :        103 :                                           ? node->inlined_to : node;
    2748                 :        103 :               reset_edge_caches (where);
    2749                 :        103 :               ipa_update_overall_fn_summary (where);
    2750                 :            :             }
    2751                 :    2604310 :           if (want_inline_function_to_all_callers_p (node, cold))
    2752                 :            :             {
    2753                 :      22362 :               int num_calls = 0;
    2754                 :      22362 :               node->call_for_symbol_and_aliases (sum_callers, &num_calls,
    2755                 :            :                                                  true);
    2756                 :      22933 :               while (node->call_for_symbol_and_aliases
    2757                 :      22933 :                        (inline_to_all_callers, &num_calls, true))
    2758                 :            :                 ;
    2759                 :      22362 :               remove_functions = true;
    2760                 :            :             }
    2761                 :            :         }
    2762                 :            :     }
    2763                 :            : 
    2764                 :            :   /* Free ipa-prop structures if they are no longer needed.  */
    2765                 :     163701 :   ipa_free_all_structures_after_iinln ();
    2766                 :            : 
    2767                 :     163701 :   if (dump_enabled_p ())
    2768                 :        243 :     dump_printf (MSG_NOTE,
    2769                 :            :                  "\nInlined %i calls, eliminated %i functions\n\n",
    2770                 :            :                  ncalls_inlined, nfunctions_inlined);
    2771                 :     163701 :   if (dump_file)
    2772                 :        192 :     dump_inline_stats ();
    2773                 :            : 
    2774                 :     163701 :   if (dump_file)
    2775                 :        192 :     ipa_dump_fn_summaries (dump_file);
    2776                 :     163701 :   return remove_functions ? TODO_remove_functions : 0;
    2777                 :            : }
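The "inline into all callers" decision made in ipa_inline above pays off when eliminating the out-of-line copy outweighs duplicating the body at every call site. A rough, purely illustrative size model (hypothetical names; this is not GCC's actual cost model):

```python
def size_delta_if_inlined_everywhere(body_size, call_overhead, n_callers):
    """Rough code-size change from inlining a function into all of its
    callers and then dropping the out-of-line copy: each call site grows
    by (body_size - call_overhead), and one standalone body goes away."""
    return n_callers * (body_size - call_overhead) - body_size

# A function called once always shrinks the program under this model:
# a 30-insn body replacing a 5-insn call gives 1 * (30 - 5) - 30 == -5.
print(size_delta_if_inlined_everywhere(30, 5, 1))
```

With three callers the same function grows the program (3 * 25 - 30 == 45), which is why the real pass still checks function growth limits.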
    2778                 :            : 
    2779                 :            : /* Inline always-inline function calls in NODE.  */
    2780                 :            : 
    2781                 :            : static bool
    2782                 :    1847180 : inline_always_inline_functions (struct cgraph_node *node)
    2783                 :            : {
    2784                 :    1847180 :   struct cgraph_edge *e;
    2785                 :    1847180 :   bool inlined = false;
    2786                 :            : 
    2787                 :    8016450 :   for (e = node->callees; e; e = e->next_callee)
    2788                 :            :     {
    2789                 :    6169260 :       struct cgraph_node *callee = e->callee->ultimate_alias_target ();
    2790                 :    6169260 :       if (!DECL_DISREGARD_INLINE_LIMITS (callee->decl))
    2791                 :    6111410 :         continue;
    2792                 :            : 
    2793                 :      57856 :       if (e->recursive_p ())
    2794                 :            :         {
    2795                 :          6 :           if (dump_enabled_p ())
    2796                 :          0 :             dump_printf_loc (MSG_MISSED_OPTIMIZATION, e->call_stmt,
    2797                 :            :                              "  Not inlining recursive call to %C.\n",
    2798                 :            :                              e->callee);
    2799                 :          6 :           e->inline_failed = CIF_RECURSIVE_INLINING;
    2800                 :          6 :           continue;
    2801                 :            :         }
    2802                 :            : 
    2803                 :      57850 :       if (!can_early_inline_edge_p (e))
    2804                 :            :         {
    2805                 :            :           /* Set inlined to true if the callee is marked "always_inline" but
    2806                 :            :              is not inlinable.  This will allow flagging an error later in
    2807                 :            :              expand_call_inline in tree-inline.c.  */
    2808                 :         30 :           if (lookup_attribute ("always_inline",
    2809                 :         30 :                                  DECL_ATTRIBUTES (callee->decl)) != NULL)
    2810                 :         17 :             inlined = true;
    2811                 :         30 :           continue;
    2812                 :            :         }
    2813                 :            : 
    2814                 :      57820 :       if (dump_enabled_p ())
    2815                 :          9 :         dump_printf_loc (MSG_OPTIMIZED_LOCATIONS, e->call_stmt,
    2816                 :            :                          "  Inlining %C into %C (always_inline).\n",
    2817                 :            :                          e->callee, e->caller);
    2818                 :      57820 :       inline_call (e, true, NULL, NULL, false);
    2819                 :      57820 :       inlined = true;
    2820                 :            :     }
    2821                 :    1847180 :   if (inlined)
    2822                 :      34439 :     ipa_update_overall_fn_summary (node);
    2823                 :            : 
    2824                 :    1847180 :   return inlined;
    2825                 :            : }
    2826                 :            : 
    2827                 :            : /* Inline small eligible calls in NODE during early inlining.  Return
    2828                 :            :    true if something was inlined.  */
    2829                 :            : 
    2830                 :            : static bool
    2831                 :    1522770 : early_inline_small_functions (struct cgraph_node *node)
    2832                 :            : {
    2833                 :    1522770 :   struct cgraph_edge *e;
    2834                 :    1522770 :   bool inlined = false;
    2835                 :            : 
    2836                 :    6744830 :   for (e = node->callees; e; e = e->next_callee)
    2837                 :            :     {
    2838                 :    5222060 :       struct cgraph_node *callee = e->callee->ultimate_alias_target ();
    2839                 :            : 
    2840                 :            :       /* We can encounter a not-yet-analyzed function during
    2841                 :            :          early inlining on callgraphs with strongly
    2842                 :            :          connected components.  */
    2843                 :    5222060 :       ipa_fn_summary *s = ipa_fn_summaries->get (callee);
    2844                 :    2640540 :       if (s == NULL || !s->inlinable || !e->inline_failed)
    2845                 :    2805470 :         continue;
    2846                 :            : 
    2847                 :            :       /* Do not consider functions not declared inline.  */
    2848                 :    2416590 :       if (!DECL_DECLARED_INLINE_P (callee->decl)
    2849                 :     524977 :           && !opt_for_fn (node->decl, flag_inline_small_functions)
    2850                 :    2443360 :           && !opt_for_fn (node->decl, flag_inline_functions))
    2851                 :      26629 :         continue;
    2852                 :            : 
    2853                 :    2389960 :       if (dump_enabled_p ())
    2854                 :        179 :         dump_printf_loc (MSG_NOTE, e->call_stmt,
    2855                 :            :                          "Considering inline candidate %C.\n",
    2856                 :            :                          callee);
    2857                 :            : 
    2858                 :    2389960 :       if (!can_early_inline_edge_p (e))
    2859                 :      69236 :         continue;
    2860                 :            : 
    2861                 :    2320720 :       if (e->recursive_p ())
    2862                 :            :         {
    2863                 :       6117 :           if (dump_enabled_p ())
    2864                 :          0 :             dump_printf_loc (MSG_MISSED_OPTIMIZATION, e->call_stmt,
    2865                 :            :                              "  Not inlining: recursive call.\n");
    2866                 :       6117 :           continue;
    2867                 :            :         }
    2868                 :            : 
    2869                 :    2314610 :       if (!want_early_inline_function_p (e))
    2870                 :     541548 :         continue;
    2871                 :            : 
    2872                 :    1773060 :       if (dump_enabled_p ())
    2873                 :        126 :         dump_printf_loc (MSG_OPTIMIZED_LOCATIONS, e->call_stmt,
    2874                 :            :                          " Inlining %C into %C.\n",
    2875                 :            :                          callee, e->caller);
    2876                 :    1773060 :       inline_call (e, true, NULL, NULL, false);
    2877                 :    1773060 :       inlined = true;
    2878                 :            :     }
    2879                 :            : 
    2880                 :    1522770 :   if (inlined)
    2881                 :     633139 :     ipa_update_overall_fn_summary (node);
    2882                 :            : 
    2883                 :    1522770 :   return inlined;
    2884                 :            : }
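The loop above rejects each candidate edge through a chain of independent gates before inlining. A minimal sketch of that filtering (hypothetical field names mirroring the checks in early_inline_small_functions; the real pass also distinguishes -finline-small-functions from -finline-functions):

```python
def early_inline_candidates(edges):
    """edges: list of dicts describing call edges; returns callees kept."""
    picked = []
    for e in edges:
        if not e["analyzed"] or not e["inlinable"]:
            continue    # not-yet-analyzed callee, or summary says no
        if not (e["declared_inline"] or e["inline_small_functions"]):
            continue    # not declared inline and the flag is off
        if e["recursive"]:
            continue    # never early-inline recursive calls
        if not e["wanted"]:
            continue    # growth heuristics reject the edge
        picked.append(e["callee"])
    return picked
```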
    2885                 :            : 
    2886                 :            : unsigned int
    2887                 :    1847200 : early_inliner (function *fun)
    2888                 :            : {
    2889                 :    1847200 :   struct cgraph_node *node = cgraph_node::get (current_function_decl);
    2890                 :    1847200 :   struct cgraph_edge *edge;
    2891                 :    1847200 :   unsigned int todo = 0;
    2892                 :    1847200 :   int iterations = 0;
    2893                 :    1847200 :   bool inlined = false;
    2894                 :            : 
    2895                 :    1847200 :   if (seen_error ())
    2896                 :            :     return 0;
    2897                 :            : 
    2898                 :            :   /* Do nothing if data structures for the IPA inliner are already computed.
    2899                 :            :      This happens when some pass decides to construct a new function and
    2900                 :            :      cgraph_add_new_function calls lowering passes and early optimization on
    2901                 :            :      it.  This may confuse us when the early inliner decides to inline a call
    2902                 :            :      to a function clone, because function clones don't have a parameter list
    2903                 :            :      in ipa-prop matching their signature.  */
    2904                 :    1847190 :   if (ipa_node_params_sum)
    2905                 :            :     return 0;
    2906                 :            : 
    2907                 :    1847180 :   if (flag_checking)
    2908                 :    1847170 :     node->verify ();
    2909                 :    1847180 :   node->remove_all_references ();
    2910                 :            : 
    2911                 :            :   /* Even when not optimizing or not inlining, inline always-inline
    2912                 :            :      functions.  */
    2913                 :    1847180 :   inlined = inline_always_inline_functions (node);
    2914                 :            : 
    2915                 :    1847180 :   if (!optimize
    2916                 :    1590570 :       || flag_no_inline
    2917                 :    1580070 :       || !flag_early_inlining
    2918                 :            :       /* Never inline regular functions into always-inline functions
    2919                 :            :          during incremental inlining.  This sucks, as functions calling
    2920                 :            :          always-inline functions will get less optimized, but at the
    2921                 :            :          same time inlining a function that calls an always-inline
    2922                 :            :          function into an always-inline function might introduce
    2923                 :            :          cycles of always-inline edges in the callgraph.
    2924                 :            : 
    2925                 :            :          We might want to be smarter and just avoid this type of inlining.  */
    2926                 :    3425660 :       || (DECL_DISREGARD_INLINE_LIMITS (node->decl)
    2927                 :      56295 :           && lookup_attribute ("always_inline",
    2928                 :    1578480 :                                DECL_ATTRIBUTES (node->decl))))
    2929                 :            :     ;
    2930                 :    1522710 :   else if (lookup_attribute ("flatten",
    2931                 :    1522710 :                              DECL_ATTRIBUTES (node->decl)) != NULL)
    2932                 :            :     {
    2933                 :            :       /* When the function is marked to be flattened, recursively inline
    2934                 :            :          all calls in it.  */
    2935                 :         42 :       if (dump_enabled_p ())
    2936                 :          0 :         dump_printf (MSG_OPTIMIZED_LOCATIONS,
    2937                 :            :                      "Flattening %C\n", node);
    2938                 :         42 :       flatten_function (node, true, true);
    2939                 :         42 :       inlined = true;
    2940                 :            :     }
    2941                 :            :   else
    2942                 :            :     {
    2943                 :            :       /* If some always_inline functions were inlined, apply the changes.
    2944                 :            :          This way we will not count always-inline functions toward the growth
    2945                 :            :          limits, and moreover we will inline calls from always-inline
    2946                 :            :          functions that we previously skipped because of the conditional above.  */
    2947                 :    1522670 :       if (inlined)
    2948                 :            :         {
    2949                 :      11885 :           timevar_push (TV_INTEGRATION);
    2950                 :      11885 :           todo |= optimize_inline_calls (current_function_decl);
    2951                 :            :           /* The optimize_inline_calls call above might have introduced new
    2952                 :            :              statements whose inline parameters have not been computed.  */
    2953                 :     112036 :           for (edge = node->callees; edge; edge = edge->next_callee)
    2954                 :            :             {
    2955                 :            :               /* We can encounter a not-yet-analyzed function during
    2956                 :            :                  early inlining on callgraphs with strongly
    2957                 :            :                  connected components.  */
    2958                 :     100151 :               ipa_call_summary *es = ipa_call_summaries->get_create (edge);
    2959                 :     100151 :               es->call_stmt_size
    2960                 :     100151 :                 = estimate_num_insns (edge->call_stmt, &eni_size_weights);
    2961                 :     100151 :               es->call_stmt_time
    2962                 :     100151 :                 = estimate_num_insns (edge->call_stmt, &eni_time_weights);
    2963                 :            :             }
    2964                 :      11885 :           ipa_update_overall_fn_summary (node);
    2965                 :      11885 :           inlined = false;
    2966                 :      11885 :           timevar_pop (TV_INTEGRATION);
    2967                 :            :         }
    2968                 :            :       /* We iterate incremental inlining to get trivial cases of indirect
    2969                 :            :          inlining.  */
    2970                 :    4311610 :       while (iterations < opt_for_fn (node->decl,
    2971                 :            :                                       param_early_inliner_max_iterations)
    2972                 :    2155810 :              && early_inline_small_functions (node))
    2973                 :            :         {
    2974                 :     633139 :           timevar_push (TV_INTEGRATION);
    2975                 :     633139 :           todo |= optimize_inline_calls (current_function_decl);
    2976                 :            : 
    2977                 :            :           /* Technically we ought to recompute inline parameters so the next
    2978                 :            :              iteration of the early inliner works as expected.  However, the
    2979                 :            :              values are approximately right, so we only need to update edge
    2980                 :            :              info that might have been cleared out for newly discovered edges.  */
    2981                 :    2096880 :           for (edge = node->callees; edge; edge = edge->next_callee)
    2982                 :            :             {
    2983                 :            :               /* We have no summary for new bound store calls yet.  */
    2984                 :    1463740 :               ipa_call_summary *es = ipa_call_summaries->get_create (edge);
    2985                 :    1463740 :               es->call_stmt_size
    2986                 :    1463740 :                 = estimate_num_insns (edge->call_stmt, &eni_size_weights);
    2987                 :    1463740 :               es->call_stmt_time
    2988                 :    1463740 :                 = estimate_num_insns (edge->call_stmt, &eni_time_weights);
    2989                 :            :             }
    2990                 :     633139 :           if (iterations < opt_for_fn (node->decl,
    2991                 :     633139 :                                        param_early_inliner_max_iterations) - 1)
    2992                 :        105 :             ipa_update_overall_fn_summary (node);
    2993                 :     633139 :           timevar_pop (TV_INTEGRATION);
    2994                 :     633139 :           iterations++;
    2995                 :     633139 :           inlined = false;
    2996                 :            :         }
    2997                 :    1522670 :       if (dump_file)
    2998                 :        220 :         fprintf (dump_file, "Iterations: %i\n", iterations);
    2999                 :            :     }
    3000                 :            : 
    3001                 :    1847180 :   if (inlined)
    3002                 :            :     {
    3003                 :      22596 :       timevar_push (TV_INTEGRATION);
    3004                 :      22596 :       todo |= optimize_inline_calls (current_function_decl);
    3005                 :      22596 :       timevar_pop (TV_INTEGRATION);
    3006                 :            :     }
    3007                 :            : 
    3008                 :    1847180 :   fun->always_inline_functions_inlined = true;
    3009                 :            : 
    3010                 :    1847180 :   return todo;
    3011                 :            : }
    3012                 :            : 
    3013                 :            : /* Do inlining of small functions.  Doing so early helps profiling and other
    3014                 :            :    passes to be somewhat more effective and avoids some code duplication in
    3015                 :            :    the later real inlining pass for testcases with very many function calls.  */
    3016                 :            : 
    3017                 :            : namespace {
    3018                 :            : 
    3019                 :            : const pass_data pass_data_early_inline =
    3020                 :            : {
    3021                 :            :   GIMPLE_PASS, /* type */
    3022                 :            :   "einline", /* name */
    3023                 :            :   OPTGROUP_INLINE, /* optinfo_flags */
    3024                 :            :   TV_EARLY_INLINING, /* tv_id */
    3025                 :            :   PROP_ssa, /* properties_required */
    3026                 :            :   0, /* properties_provided */
    3027                 :            :   0, /* properties_destroyed */
    3028                 :            :   0, /* todo_flags_start */
    3029                 :            :   0, /* todo_flags_finish */
    3030                 :            : };
    3031                 :            : 
    3032                 :            : class pass_early_inline : public gimple_opt_pass
    3033                 :            : {
    3034                 :            : public:
    3035                 :     200773 :   pass_early_inline (gcc::context *ctxt)
    3036                 :     401546 :     : gimple_opt_pass (pass_data_early_inline, ctxt)
    3037                 :            :   {}
    3038                 :            : 
    3039                 :            :   /* opt_pass methods: */
    3040                 :            :   virtual unsigned int execute (function *);
    3041                 :            : 
    3042                 :            : }; // class pass_early_inline
    3043                 :            : 
    3044                 :            : unsigned int
    3045                 :    1847200 : pass_early_inline::execute (function *fun)
    3046                 :            : {
    3047                 :    1847200 :   return early_inliner (fun);
    3048                 :            : }
    3049                 :            : 
    3050                 :            : } // anon namespace
    3051                 :            : 
    3052                 :            : gimple_opt_pass *
    3053                 :     200773 : make_pass_early_inline (gcc::context *ctxt)
    3054                 :            : {
    3055                 :     200773 :   return new pass_early_inline (ctxt);
    3056                 :            : }
    3057                 :            : 
    3058                 :            : namespace {
    3059                 :            : 
    3060                 :            : const pass_data pass_data_ipa_inline =
    3061                 :            : {
    3062                 :            :   IPA_PASS, /* type */
    3063                 :            :   "inline", /* name */
    3064                 :            :   OPTGROUP_INLINE, /* optinfo_flags */
    3065                 :            :   TV_IPA_INLINING, /* tv_id */
    3066                 :            :   0, /* properties_required */
    3067                 :            :   0, /* properties_provided */
    3068                 :            :   0, /* properties_destroyed */
    3069                 :            :   0, /* todo_flags_start */
    3070                 :            :   ( TODO_dump_symtab ), /* todo_flags_finish */
    3071                 :            : };
    3072                 :            : 
    3073                 :            : class pass_ipa_inline : public ipa_opt_pass_d
    3074                 :            : {
    3075                 :            : public:
    3076                 :     200773 :   pass_ipa_inline (gcc::context *ctxt)
    3077                 :            :     : ipa_opt_pass_d (pass_data_ipa_inline, ctxt,
    3078                 :            :                       NULL, /* generate_summary */
    3079                 :            :                       NULL, /* write_summary */
    3080                 :            :                       NULL, /* read_summary */
    3081                 :            :                       NULL, /* write_optimization_summary */
    3082                 :            :                       NULL, /* read_optimization_summary */
    3083                 :            :                       NULL, /* stmt_fixup */
    3084                 :            :                       0, /* function_transform_todo_flags_start */
    3085                 :            :                       inline_transform, /* function_transform */
    3086                 :     401546 :                       NULL) /* variable_transform */
    3087                 :            :   {}
    3088                 :            : 
    3089                 :            :   /* opt_pass methods: */
    3090                 :     163701 :   virtual unsigned int execute (function *) { return ipa_inline (); }
    3091                 :            : 
    3092                 :            : }; // class pass_ipa_inline
    3093                 :            : 
    3094                 :            : } // anon namespace
    3095                 :            : 
    3096                 :            : ipa_opt_pass_d *
    3097                 :     200773 : make_pass_ipa_inline (gcc::context *ctxt)
    3098                 :            : {
    3099                 :     200773 :   return new pass_ipa_inline (ctxt);
    3100                 :            : }

Generated by: LCOV version 1.0

The LCOV profile was generated on an x86_64 machine using the following configure options: configure --disable-bootstrap --enable-coverage=opt --enable-languages=c,c++,fortran,go,jit,lto --enable-host-shared. The GCC test suite was run with the built compiler.