2025-07-18T10:17:52+08:00 - gpustack.worker.backends.base - INFO - Preparing model files...
2025-07-18T10:17:52+08:00 - gpustack.worker.backends.base - INFO - Model files are ready.
2025-07-18T10:17:53+08:00 - gpustack.worker.backends.vllm - INFO - Starting vllm server
INFO 07-18 10:18:01 [__init__.py:240] Automatically detected platform cuda.
INFO 07-18 10:18:04 [api_server.py:1034] vLLM API server version 0.8.3
INFO 07-18 10:18:04 [api_server.py:1035] args: Namespace(subparser='serve', model_tag='/data/models/BAAI/bge-reranker-v2-m3', config='', host='0.0.0.0', port=40046, uvicorn_log_level='info', disable_uvicorn_access_log=False, allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template=None, chat_template_content_format='auto', response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, enable_ssl_refresh=False, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, enable_request_id_headers=False, enable_auto_tool_choice=False, tool_call_parser=None, tool_parser_plugin='', model='/data/models/BAAI/bge-reranker-v2-m3', task='auto', tokenizer=None, hf_config_path=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, allowed_local_media_path=None, download_dir=None, load_format='auto', config_format=, dtype='auto', kv_cache_dtype='auto', max_model_len=8192, guided_decoding_backend='xgrammar', logits_processor_pattern=None, model_impl='auto', distributed_executor_backend=None, pipeline_parallel_size=1, tensor_parallel_size=1, data_parallel_size=1, enable_expert_parallel=False, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=None, enable_prefix_caching=None, prefix_caching_hash_algo='builtin', disable_sliding_window=False, use_v2_block_manager=True, num_lookahead_slots=0, seed=None, swap_space=4, cpu_offload_gb=0, gpu_memory_utilization=0.9, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_partial_prefills=1, max_long_partial_prefills=1, long_prefill_token_threshold=0, max_num_seqs=None, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, hf_overrides=None, enforce_eager=False, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, limit_mm_per_prompt=None, mm_processor_kwargs=None, disable_mm_preprocessor_cache=False, enable_lora=False, enable_lora_bias=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', num_scheduler_steps=1, use_tqdm_on_load=True, multi_step_stream_outputs=True, scheduler_delay_factor=0.0, enable_chunked_prefill=None, speculative_config=None, model_loader_extra_config=None, ignore_patterns=[], preemption_mode=None, served_model_name=['bge-reranker-v2-m3'], qlora_adapter_name_or_path=None, show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, disable_async_output_proc=False, scheduling_policy='fcfs', scheduler_cls='vllm.core.scheduler.Scheduler', override_neuron_config=None, override_pooler_config=None, compilation_config=None, kv_transfer_config=None, worker_cls='auto', worker_extension_cls='', generation_config='auto', override_generation_config=None, enable_sleep_mode=False, calculate_kv_scales=False, additional_config=None, enable_reasoning=False, reasoning_parser=None, disable_cascade_attn=False, disable_log_requests=False, max_log_len=None, disable_fastapi_docs=False, enable_prompt_tokens_details=False, enable_server_load_tracking=False, dispatch_function=)
INFO 07-18 10:18:04 [config.py:2712] Downcasting torch.float32 to torch.float16.
INFO 07-18 10:18:21 [config.py:610] This model supports multiple tasks: {'score', 'embed', 'reward', 'classify'}. Defaulting to 'score'.
WARNING 07-18 10:18:21 [config.py:446] Please export VLLM_ENFORCE_CUDA_GRAPH=1 to enable cuda graph. For now, cuda graph is not used and --enforce-eager is disabled ,we are trying to use cuda graph as the default mode
WARNING 07-18 10:18:21 [cuda.py:95] To see benefits of async output processing, enable CUDA graph. Since, enforce-eager is enabled, async output processor cannot be used
INFO 07-18 10:18:21 [api_server.py:246] Started engine process with PID 3271847
INFO 07-18 10:18:29 [__init__.py:240] Automatically detected platform cuda.
INFO 07-18 10:18:32 [llm_engine.py:242] Initializing a V0 LLM engine (v0.8.3) with config: model='/data/models/BAAI/bge-reranker-v2-m3', speculative_config=None, tokenizer='/data/models/BAAI/bge-reranker-v2-m3', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.float16, max_seq_len=8192, download_dir=None, load_format=auto, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=True, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='xgrammar', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=bge-reranker-v2-m3, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=None, chunked_prefill_enabled=False, use_async_output_proc=False, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=PoolerConfig(pooling_type=None, normalize=None, softmax=None, step_tag_id=None, returned_token_ids=None), compilation_config={"splitting_ops":[],"compile_sizes":[],"cudagraph_capture_sizes":[],"max_capture_size":0}, use_cached_outputs=True,
/usr/local/corex/lib64/python3/dist-packages/vllm/executor/uniproc_executor.py:29: ResourceWarning: unclosed
  get_ip(), get_open_port())
ResourceWarning: Enable tracemalloc to get the object allocation traceback
INFO 07-18 10:18:33 [cuda.py:291] Using Flash Attention backend.
INFO 07-18 10:18:33 [parallel_state.py:991] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, TP rank 0
INFO 07-18 10:18:33 [model_runner.py:1110] Starting to load model /data/models/BAAI/bge-reranker-v2-m3...
Loading safetensors checkpoint shards:   0% Completed | 0/1 [00:00
INFO 07-18 11:29:06 [logger.py:39] Received request rerank-c9a70afcdfe04b78a90dacc06bf75e86-0: prompt: 'What is the capital of the United States?Carson City is the capital city of the American state of Nevada. At the 2010 United States Census, Carson City had a population of 55,274.', params: PoolingParams(additional_metadata=None), prompt_token_ids: None, lora_request: None, prompt_adapter_request: None.
INFO 07-18 11:29:06 [logger.py:39] Received request rerank-c9a70afcdfe04b78a90dacc06bf75e86-1: prompt: 'What is the capital of the United States?The Commonwealth of the Northern Mariana Islands is a group of islands in the Pacific Ocean that are a political division controlled by the United States. Its capital is Saipan.', params: PoolingParams(additional_metadata=None), prompt_token_ids: None, lora_request: None, prompt_adapter_request: None.
INFO 07-18 11:29:06 [engine.py:310] Added request rerank-c9a70afcdfe04b78a90dacc06bf75e86-0.
INFO 07-18 11:29:06 [engine.py:310] Added request rerank-c9a70afcdfe04b78a90dacc06bf75e86-1.
INFO 07-18 11:29:07 [metrics.py:488] Avg prompt throughput: 13.5 tokens/s, Avg generation throughput: 0.3 tokens/s, Running: 0 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 0.0%, CPU KV cache usage: 0.0%.
INFO: 127.0.0.1:53762 - "POST /v1/rerank HTTP/1.1" 200 OK
INFO 07-18 11:29:17 [metrics.py:488] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 0 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 0.0%, CPU KV cache usage: 0.0%.
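For context, the "POST /v1/rerank ... 200 OK" entry above corresponds to a rerank request against the server's /v1/rerank endpoint. The sketch below reproduces such a call; it is an illustrative assumption, not taken from the original report: the base URL (localhost with the logged port 40046) is inferred from the args in the startup log, and the query/documents are copied from the logged request prompts.

```python
import requests

# Base URL is an assumption: the log shows host='0.0.0.0', port=40046;
# adjust to wherever the worker (or the GPUStack proxy) is reachable.
url = "http://localhost:40046/v1/rerank"

payload = {
    "model": "bge-reranker-v2-m3",  # served_model_name from the logged args
    "query": "What is the capital of the United States?",
    "documents": [
        "Carson City is the capital city of the American state of Nevada. "
        "At the 2010 United States Census, Carson City had a population of 55,274.",
        "The Commonwealth of the Northern Mariana Islands is a group of islands "
        "in the Pacific Ocean that are a political division controlled by the "
        "United States. Its capital is Saipan.",
    ],
}

response = requests.post(url, json=payload, timeout=30)
response.raise_for_status()

# Each result is expected to carry the document index and its relevance score.
for result in response.json()["results"]:
    print(result["index"], result["relevance_score"])
```

A request like this produces the two "Received request rerank-...-0/-1" entries (one pooling request per document) followed by the access-log line with status 200.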