Thursday, March 14, 2024

OpenAI API Study Notes: Using the openai Package to Call the GPT Language Models (Part 1)

Yesterday instructor 維元's hands-on ChatGPT in-house training course covered calling the OpenAI API. Since I had already registered an OpenAI account and created a key for accessing the GPT models in an earlier post, I could follow along and test things in Colab during class. This post records how to connect to and chat with the models using the openai Python package.

Previous articles in this series:


To call the OpenAI API you must first obtain an API key before the tests in this post can be carried out.


1. Installing the openai Package:

To call the OpenAI API, the openai package must be installed first:

pip install openai   

If you are working in Colab, prefix the command with an exclamation mark:

!pip install openai   

D:\python\test>pip install openai    
Collecting openai
  Downloading openai-1.14.0-py3-none-any.whl.metadata (18 kB)
Requirement already satisfied: anyio<5,>=3.5.0 in c:\users\tony1\appdata\roaming\python\python310\site-packages (from openai) (3.7.1)
Collecting distro<2,>=1.7.0 (from openai)
  Downloading distro-1.9.0-py3-none-any.whl.metadata (6.8 kB)
Requirement already satisfied: httpx<1,>=0.23.0 in c:\users\tony1\appdata\roaming\python\python310\site-packages (from openai) (0.24.1)
Requirement already satisfied: pydantic<3,>=1.9.0 in c:\users\tony1\appdata\roaming\python\python310\site-packages (from openai) (2.5.3)
Requirement already satisfied: sniffio in c:\users\tony1\appdata\roaming\python\python310\site-packages (from openai) (1.3.0)
Requirement already satisfied: tqdm>4 in c:\users\tony1\appdata\roaming\python\python310\site-packages (from openai) (4.66.1)
Requirement already satisfied: typing-extensions<5,>=4.7 in c:\users\tony1\appdata\roaming\python\python310\site-packages (from openai) (4.9.0)
Requirement already satisfied: idna>=2.8 in c:\users\tony1\appdata\roaming\python\python310\site-packages (from anyio<5,>=3.5.0->openai) (3.4)
Requirement already satisfied: exceptiongroup in c:\users\tony1\appdata\roaming\python\python310\site-packages (from anyio<5,>=3.5.0->openai) (1.1.3)
Requirement already satisfied: certifi in c:\users\tony1\appdata\roaming\python\python310\site-packages (from httpx<1,>=0.23.0->openai) (2023.7.22)
Requirement already satisfied: httpcore<0.18.0,>=0.15.0 in c:\users\tony1\appdata\roaming\python\python310\site-packages (from httpx<1,>=0.23.0->openai) (0.17.3)
Requirement already satisfied: annotated-types>=0.4.0 in c:\users\tony1\appdata\roaming\python\python310\site-packages (from pydantic<3,>=1.9.0->openai) (0.5.0)
Requirement already satisfied: pydantic-core==2.14.6 in c:\users\tony1\appdata\roaming\python\python310\site-packages (from pydantic<3,>=1.9.0->openai) (2.14.6)
Requirement already satisfied: colorama in c:\users\tony1\appdata\local\programs\thonny\lib\site-packages (from tqdm>4->openai) (0.4.6)
Requirement already satisfied: h11<0.15,>=0.13 in c:\users\tony1\appdata\roaming\python\python310\site-packages (from httpcore<0.18.0,>=0.15.0->httpx<1,>=0.23.0->openai) (0.14.0)
Downloading openai-1.14.0-py3-none-any.whl (257 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 257.5/257.5 kB 933.1 kB/s eta 0:00:00
Downloading distro-1.9.0-py3-none-any.whl (20 kB)
Installing collected packages: distro, openai
Successfully installed distro-1.9.0 openai-1.14.0

First import the package with import openai, then use dir() to inspect its contents:

>>> import openai   
>>> print(openai.__version__)  
1.14.0
>>> dir(openai)     
['APIConnectionError', 'APIError', 'APIResponse', 'APIResponseValidationError', 'APIStatusError', 'APITimeoutError', 'AssistantEventHandler', 'AsyncAPIResponse', 'AsyncAssistantEventHandler', 'AsyncAzureOpenAI', 'AsyncClient', 'AsyncOpenAI', 'AsyncStream', 'Audio', 'AuthenticationError', 'AzureOpenAI', 'BadRequestError', 'BaseModel', 'ChatCompletion', 'Client', 'Completion', 'ConflictError', 'Customer', 'DEFAULT_MAX_RETRIES', 'DEFAULT_TIMEOUT', 'Deployment', 'Edit', 'Embedding', 'Engine', 'ErrorObject', 'File', 'FineTune', 'FineTuningJob', 'Image', 'InternalServerError', 'Model', 'Moderation', 'NOT_GIVEN', 'NoneType', 'NotFoundError', 'NotGiven', 'OpenAI', 'OpenAIError', 'PermissionDeniedError', 'ProxiesTypes', 'RateLimitError', 'RequestOptions', 'Stream', 'Timeout', 'Transport', 'UnprocessableEntityError', 'VERSION', '_AmbiguousModuleClientUsageError', '_ApiType', '_AzureModuleClient', '_ModuleClient', '__all__', '__annotations__', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__locals', '__name', '__name__', '__package__', '__path__', '__spec__', '__title__', '__version__', '_azure', '_base_client', '_client', '_compat', '_constants', '_exceptions', '_extras', '_files', '_has_azure_ad_credentials', '_has_azure_credentials', '_has_openai_credentials', '_httpx', '_legacy_response', '_load_client', '_models', '_module_client', '_os', '_qs', '_reset_client', '_resource', '_response', '_setup_logging', '_streaming', '_t', '_te', '_types', '_utils', '_version', 'annotations', 'api_key', 'api_type', 'api_version', 'audio', 'azure_ad_token', 'azure_ad_token_provider', 'azure_endpoint', 'base_url', 'beta', 'chat', 'completions', 'default_headers', 'default_query', 'embeddings', 'file_from_path', 'files', 'fine_tuning', 'http_client', 'images', 'lib', 'max_retries', 'models', 'moderations', 'organization', 'override', 'pagination', 'resources', 'timeout', 'types', 'version']

dir() does not tell us which members are classes, functions, or modules; the custom module members below can do that:

# members.py
import inspect

def varname(x):
    # return the name of the variable in the caller's frame that refers to x
    return [k for k, v in inspect.currentframe().f_back.f_locals.items() if v is x][0]

def list_members(parent_obj):
    # print every public member of parent_obj together with its type
    members = dir(parent_obj)
    parent_obj_name = varname(parent_obj)
    for mbr in members:
        child_obj = eval(parent_obj_name + '.' + mbr)
        if not mbr.startswith('_'):
            print(mbr, type(child_obj))


Save this as a module named members.py in the current working directory, then import its list_members() function to inspect the openai package:

>>> from members import list_members     
>>> list_members(openai)    
APIConnectionError <class 'type'>
APIError <class 'type'>
APIResponse <class 'type'>
APIResponseValidationError <class 'type'>
APIStatusError <class 'type'>
APITimeoutError <class 'type'>
AssistantEventHandler <class 'type'>
AsyncAPIResponse <class 'type'>
AsyncAssistantEventHandler <class 'type'>
AsyncAzureOpenAI <class 'type'>
AsyncClient <class 'type'>
AsyncOpenAI <class 'type'>
AsyncStream <class 'type'>
Audio <class 'openai.lib._old_api.APIRemovedInV1Proxy'>
AuthenticationError <class 'type'>
AzureOpenAI <class 'type'>
BadRequestError <class 'type'>
BaseModel <class 'pydantic._internal._model_construction.ModelMetaclass'>
ChatCompletion <class 'openai.lib._old_api.APIRemovedInV1Proxy'>
Client <class 'type'>
Completion <class 'openai.lib._old_api.APIRemovedInV1Proxy'>
ConflictError <class 'type'>
Customer <class 'openai.lib._old_api.APIRemovedInV1Proxy'>
DEFAULT_MAX_RETRIES <class 'int'>
DEFAULT_TIMEOUT <class 'openai.Timeout'>
Deployment <class 'openai.lib._old_api.APIRemovedInV1Proxy'>
Edit <class 'openai.lib._old_api.APIRemovedInV1Proxy'>
Embedding <class 'openai.lib._old_api.APIRemovedInV1Proxy'>
Engine <class 'openai.lib._old_api.APIRemovedInV1Proxy'>
ErrorObject <class 'openai.lib._old_api.APIRemovedInV1Proxy'>
File <class 'openai.lib._old_api.APIRemovedInV1Proxy'>
FineTune <class 'openai.lib._old_api.APIRemovedInV1Proxy'>
FineTuningJob <class 'openai.lib._old_api.APIRemovedInV1Proxy'>
Image <class 'openai.lib._old_api.APIRemovedInV1Proxy'>
InternalServerError <class 'type'>
Model <class 'openai.lib._old_api.APIRemovedInV1Proxy'>
Moderation <class 'openai.lib._old_api.APIRemovedInV1Proxy'>
NOT_GIVEN <class 'openai.NotGiven'>
NoneType <class 'type'>
NotFoundError <class 'type'>
NotGiven <class 'type'>
OpenAI <class 'type'>
OpenAIError <class 'type'>
PermissionDeniedError <class 'type'>
ProxiesTypes <class 'typing._UnionGenericAlias'>
RateLimitError <class 'type'>
RequestOptions <class 'typing_extensions._TypedDictMeta'>
Stream <class 'type'>
Timeout <class 'type'>
Transport <class 'type'>
UnprocessableEntityError <class 'type'>
VERSION <class 'str'>
annotations <class '__future__._Feature'>
api_key <class 'NoneType'>
api_type <class 'NoneType'>
api_version <class 'NoneType'>
audio <class 'openai._module_client.AudioProxy'>
azure_ad_token <class 'NoneType'>
azure_ad_token_provider <class 'NoneType'>
azure_endpoint <class 'NoneType'>
base_url <class 'NoneType'>
beta <class 'openai._module_client.BetaProxy'>
chat <class 'openai._module_client.ChatProxy'>
completions <class 'openai._module_client.CompletionsProxy'>
default_headers <class 'NoneType'>
default_query <class 'NoneType'>
embeddings <class 'openai._module_client.EmbeddingsProxy'>
file_from_path <class 'function'>
files <class 'openai._module_client.FilesProxy'>
fine_tuning <class 'openai._module_client.FineTuningProxy'>
http_client <class 'NoneType'>
images <class 'openai._module_client.ImagesProxy'>
lib <class 'module'>
max_retries <class 'int'>
models <class 'openai._module_client.ModelsProxy'>
moderations <class 'openai._module_client.ModerationsProxy'>
organization <class 'NoneType'>
override <class 'function'>
pagination <class 'module'>
resources <class 'module'>
timeout <class 'openai.Timeout'>
types <class 'module'>
version <class 'module'>

As you can see, the openai package has a great many members, but to call the GPT models we only need the OpenAI class.


2. Calling the OpenAI API:

The code for calling the API is quite formulaic. First, import the OpenAI class from the openai package:

>>> from openai import OpenAI   

Next, define a string variable holding the API key, for example (not a real key):

>>> api_key='sk-tonyIqevbZyyp3eWNr1966BlbkFJpo5ls6CxBcwvSyHslay2'  

Then call the OpenAI class constructor OpenAI(), passing the key to the api_key parameter; it returns an OpenAI object:

>>> client=OpenAI(api_key=api_key)     
>>> type(client)    
<class 'openai.OpenAI'>    

Finally, call the chat.completions.create() method of this OpenAI object, passing the messages (the prompt) and model (the language model) parameters; it returns a ChatCompletion object (the response generated by GPT):

>>> chat_completion=client.chat.completions.create(
    messages=[
        {"role": "user",
         "content": "請說一個好笑的笑話",
        }],
    model="gpt-3.5-turbo",
    )
>>> type(chat_completion)   
<class 'openai.types.chat.chat_completion.ChatCompletion'>

The messages parameter is a list of dictionaries, where the content field holds the question we ask. Since I am using a newly registered free account, I can only use the GPT-3.5 model, not GPT-4.
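
Besides the required messages and model, create() also accepts optional parameters such as temperature and max_tokens, and the messages list may include a "system" message to set the assistant's behaviour. Below is a minimal sketch; the system text and the parameter values here are only illustrative assumptions, not something used in class:

# Sketch: a system message plus two common optional parameters.
# The system content and the temperature/max_tokens values are illustrative only.
chat_completion=client.chat.completions.create(
    messages=[
        {"role": "system", "content": "你是一位說笑話高手"},   # sets the assistant's persona
        {"role": "user", "content": "請說一個好笑的笑話"},
        ],
    model="gpt-3.5-turbo",
    temperature=0.8,    # higher values give more random, creative replies
    max_tokens=200,     # upper bound on the length of the reply
    )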

For the full parameter usage of the chat.completions.create() method, see the OpenAI API reference:


The content of the ChatCompletion response object looks like this:

>>> print(chat_completion)   
ChatCompletion(id='chatcmpl-92b7YN2M39rI6XvX8eyKlxuTU2SzK', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='為什麼警察不玩撲克牌?因為他們怕抓到一隻大老鼠!哈哈哈哈哈!', role='assistant', function_call=None, tool_calls=None))], created=1710406376, model='gpt-3.5-turbo-0125', object='chat.completion', system_fingerprint='fp_4f2ebda25a', usage=CompletionUsage(completion_tokens=49, prompt_tokens=20, total_tokens=69))

The print() output is one long jumbled string and hard to read; the print() from the third-party rich package produces a much more readable view of the object. For more on rich, see:


>>> import rich 
>>> rich.print(chat_completion)  
ChatCompletion(
    id='chatcmpl-92b7YN2M39rI6XvX8eyKlxuTU2SzK',
    choices=[
        Choice(
            finish_reason='stop',
            index=0,
            logprobs=None,
            message=ChatCompletionMessage(
                content='為什麼警察不玩撲克牌?因為他們怕抓到一隻大老鼠!
哈哈哈哈哈!',
                role='assistant',
                function_call=None,
                tool_calls=None
            )
        )
    ],
    created=1710406376,
    model='gpt-3.5-turbo-0125',
    object='chat.completion',
    system_fingerprint='fp_4f2ebda25a',
    usage=CompletionUsage(
        completion_tokens=49,
        prompt_tokens=20,
        total_tokens=69
    )
)

From the structure of the ChatCompletion object, we can see that GPT's reply text is stored in the choices[0].message.content attribute:

>>> response_message=chat_completion.choices[0].message.content   
>>> print(response_message)    
為什麼警察不玩撲克牌?因為他們怕抓到一隻大老鼠!哈哈哈哈哈!
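
Besides the reply text, the usage attribute of the same ChatCompletion object records the token consumption shown above, which is handy for keeping an eye on cost:

# token-usage statistics carried by the response object
usage=chat_completion.usage
print(usage.prompt_tokens)       # tokens in the question, 20 in this example
print(usage.completion_tokens)   # tokens in the generated reply, 49 here
print(usage.total_tokens)        # sum of both, 69 here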

The complete code:

from openai import OpenAI

api_key='sk-tonyIqevbZyyp3eWNr1966BlbkFJpo5ls6CxBcwvSyHslay2'   # not a real key
client=OpenAI(api_key=api_key)                    # create the API client
chat_completion=client.chat.completions.create(   # send the prompt to the model
    messages=[
        {"role": "user",
         "content": "請說一個好笑的笑話",
        }],
    model="gpt-3.5-turbo",
    )
print(chat_completion)                            # the full ChatCompletion object
response_message=chat_completion.choices[0].message.content   # the reply text
print(response_message)
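
If the call fails, for example because the key is invalid or the free quota is exhausted, the exception classes listed by dir(openai) earlier can be caught. A minimal sketch of that idea, assuming client has been created as above:

# Sketch: catching the most common errors when calling the API.
import openai

try:
    chat_completion=client.chat.completions.create(
        messages=[{"role": "user", "content": "請說一個好笑的笑話"}],
        model="gpt-3.5-turbo",
        )
except openai.AuthenticationError:
    print("invalid or expired API key")
except openai.RateLimitError:
    print("rate limit or quota exceeded")
except openai.APIConnectionError:
    print("could not reach the OpenAI API")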

For the Python usage of the OpenAI API, see the official tutorial documentation:



3. Storing the API Key in an Environment Variable:

Hard-coding the key in the program as above is not a good practice, because it can easily be leaked when the code is shared. A better approach is to store the key in an environment variable. See:


First, create a hidden plain-text file named .env in the current working directory, and use = to assign the key to a variable name such as OPENAI_KEY (note: just the bare key, not wrapped in quotes or brackets):

OPENAI_KEY=sk-tonyIqevbZyyp3eWNr1966BlbkFJpo5ls6CxBcwvSyHslay2 

Then install the third-party package python-decouple or python-dotenv; python-dotenv is used here:

pip install python-dotenv   

Now you can import the load_dotenv() function from the dotenv module to load the .env file, and use os.environ.get() (or os.getenv()) to read the key from the environment variables:

>>> from dotenv import load_dotenv     
>>> import os   
>>> load_dotenv()   
True
>>> api_key=os.environ.get('OPENAI_KEY')   
>>> print(api_key)  
sk-tonyIqevbZyyp3eWNr1966BlbkFJpo5ls6CxBcwvSyHslay2    

>>> client=OpenAI(api_key=api_key)     
>>> chat_completion=client.chat.completions.create(
    messages=[
        {"role": "user",
         "content": "請說一個好笑的笑話",
        }],
    model="gpt-3.5-turbo",
    )
>>> response_message=chat_completion.choices[0].message.content   
>>> print(response_message)    
為什麼警察不玩撲克牌?因為他們怕抓到一隻大老鼠!哈哈哈哈哈!

The complete code:

import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
api_key=os.environ.get('OPENAI_KEY')   # must match the variable name used in .env
client=OpenAI(api_key=api_key)
chat_completion=client.chat.completions.create(
    messages=[
        {"role": "user",
         "content": "請說一個好笑的笑話",
        }],
    model="gpt-3.5-turbo",
    )
print(chat_completion)
response_message=chat_completion.choices[0].message.content
print(response_message)

This avoids exposing the key directly in the program.
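
Incidentally, if the variable in .env is named OPENAI_API_KEY, which is the name the openai package looks for by default, the OpenAI() constructor will pick the key up from the environment on its own, so it does not even need to be passed explicitly. A minimal sketch under that assumption:

# Sketch: with OPENAI_API_KEY=sk-... in .env, the client finds the key by itself.
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()        # puts OPENAI_API_KEY into the process environment
client=OpenAI()      # no api_key argument needed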
