




Prompt Engineering
Author: Lee Boonstra
Acknowledgements

Content contributors
Michael Sherman
Yuan Cao
Erick Armbrust
Anant Nawalgaria
Antonio Gulli
Simone Cammel

Curators and Editors
Antonio Gulli
Anant Nawalgaria
Grace Mollison

Technical Writer
Joey Haymaker

Designer
Michael Lanning

February 2025
Table of contents

Introduction
Prompt engineering
LLM output configuration
Output length
Sampling controls
Temperature
Top-K and top-P
Putting it all together
Prompting techniques
General prompting / zero shot
One-shot & few-shot
System, contextual and role prompting
System prompting
Role prompting
Contextual prompting
Step-back prompting
Chain of Thought (CoT)
Self-consistency
Tree of Thoughts (ToT)
ReAct (reason & act)
Automatic Prompt Engineering
Code prompting
Prompts for writing code
Prompts for explaining code
Prompts for translating code
Prompts for debugging and reviewing code
What about multimodal prompting?
Best Practices
Provide examples
Design with simplicity
Be specific about the output
Use Instructions over Constraints
Control the max token length
Use variables in prompts
Experiment with input formats and writing styles
For few-shot prompting with classification tasks, mix up the classes
Adapt to model updates
Experiment with output formats
JSON Repair
Working with Schemas
Experiment together with other prompt engineers
CoT Best practices
Document the various prompt attempts
Summary
Endnotes
Introduction
When thinking about a large language model input and output, a text prompt (sometimes accompanied by other modalities such as image prompts) is the input the model uses to predict a specific output. You don't need to be a data scientist or a machine learning engineer: everyone can write a prompt. However, crafting the most effective prompt can be complicated. Many aspects of your prompt affect its efficacy: the model you use, the model's training data, the model configurations, your word choice, style and tone, structure, and context all matter. Therefore, prompt engineering is an iterative process. Inadequate prompts can lead to ambiguous, inaccurate responses, and can hinder the model's ability to provide meaningful output.
When you chat with the Gemini chatbot,1 you basically write prompts. However, this whitepaper focuses on writing prompts for the Gemini model within Vertex AI or by using the API, because by prompting the model directly you will have access to configuration such as temperature.
This whitepaper discusses prompt engineering in detail. We will look into the various prompting techniques to help you get started and share tips and best practices to become a prompting expert. We will also discuss some of the challenges you can face while crafting prompts.
Prompt engineering
Remember how an LLM works; it's a prediction engine. The model takes sequential text as an input and then predicts what the following token should be, based on the data it was trained on. The LLM is operationalized to do this over and over again, adding the previously predicted token to the end of the sequential text for predicting the following token. The next token prediction is based on the relationship between what's in the previous tokens and what the LLM has seen during its training.
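To make the prediction loop concrete, here is a minimal Python sketch. The predict_next_token function is a hypothetical stand-in for a real model's forward pass, and real LLMs operate on token IDs rather than raw strings; the point is only the append-and-predict cycle described above.
```
def generate(prompt, predict_next_token, max_tokens=50):
    """Illustrative autoregressive loop: predict a token, append it, repeat."""
    text = prompt
    for _ in range(max_tokens):
        next_token = predict_next_token(text)  # prediction based on everything seen so far
        if next_token == "<eos>":              # stop once the model signals the end of the sequence
            break
        text += next_token                     # the new token becomes part of the next input
    return text
```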
When you write a prompt, you are attempting to set up the LLM to predict the right sequence of tokens. Prompt engineering is the process of designing high-quality prompts that guide LLMs to produce accurate outputs. This process involves tinkering to find the best prompt, optimizing prompt length, and evaluating a prompt's writing style and structure in relation to the task. In the context of natural language processing and LLMs, a prompt is an input provided to the model to generate a response or prediction.
These prompts can be used to achieve various kinds of understanding and generation tasks such as text summarization, information extraction, question and answering, text classification, language or code translation, code generation, and code documentation or reasoning.
Please feel free to refer to Google's prompting guides2,3 with simple and effective prompting examples.
When prompt engineering, you will start by choosing a model. Prompts might need to be optimized for your specific model, regardless of whether you use Gemini language models in Vertex AI, GPT, Claude, or an open source model like Gemma or LLaMA.
Besides the prompt, you will also need to tinker with the various configurations of an LLM.
LLM output configuration
Once you choose your model you will need to figure out the model configuration. Most LLMs come with various configuration options that control the LLM's output. Effective prompt engineering requires setting these configurations optimally for your task.
Output length
An important configuration setting is the number of tokens to generate in a response. Generating more tokens requires more computation from the LLM, leading to higher energy consumption, potentially slower response times, and higher costs.
Reducing the output length of the LLM doesn't cause the LLM to become more stylistically or textually succinct in the output it creates, it just causes the LLM to stop predicting more tokens once the limit is reached. If your needs require a short output length, you'll also possibly need to engineer your prompt to accommodate.
Output length restriction is especially important for some LLM prompting techniques, like ReAct, where the LLM will keep emitting useless tokens after the response you want.
Be aware: generating more tokens requires more computation from the LLM, leading to higher energy consumption and potentially slower response times, which leads to higher costs.
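In practice the output length is capped through the model configuration rather than the prompt itself. A minimal sketch, assuming the Vertex AI Python SDK (class and parameter names are taken from that SDK and may differ across versions):
```
import vertexai
from vertexai.generative_models import GenerationConfig, GenerativeModel

vertexai.init(project="your-project-id", location="us-central1")  # placeholder project

model = GenerativeModel("gemini-pro")
response = model.generate_content(
    "Summarize prompt engineering in one sentence.",
    # Cap the number of generated tokens. The model simply stops at the limit;
    # it does not become more succinct on its own.
    generation_config=GenerationConfig(max_output_tokens=64),
)
print(response.text)
```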
Sampling controls
LLMs do not formally predict a single token. Rather, LLMs predict probabilities for what the next token could be, with each token in the LLM's vocabulary getting a probability. Those token probabilities are then sampled to determine what the next produced token will be.
Temperature, top-K, and top-P are the most common configuration settings that determine how predicted token probabilities are processed to choose a single output token.
Temperature
Temperature controls the degree of randomness in token selection. Lower temperatures are good for prompts that expect a more deterministic response, while higher temperatures can lead to more diverse or unexpected results. A temperature of 0 (greedy decoding) is deterministic: the highest probability token is always selected (though note that if two tokens have the same highest predicted probability, depending on how tiebreaking is implemented you may not always get the same output with temperature 0).
Temperatures close to the max tend to create more random output. And as temperature gets higher and higher, all tokens become equally likely to be the next predicted token.
The Gemini temperature control can be understood in a similar way to the softmax function used in machine learning. A low temperature setting mirrors a low softmax temperature (T), emphasizing a single, preferred token with high certainty. A higher Gemini temperature setting is like a high softmax temperature, making a wider range of tokens acceptable candidates. This increased uncertainty accommodates scenarios where a rigid, precise outcome is not essential, for example when experimenting with creative outputs.
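The softmax analogy can be made concrete in a few lines. The sketch below only illustrates how a temperature value reshapes a token probability distribution; it is not how Gemini implements sampling internally:
```
import numpy as np

def apply_temperature(logits, temperature):
    """Turn raw token scores (logits) into probabilities at a given temperature."""
    if temperature == 0:                       # greedy decoding: all probability on the top token
        probs = np.zeros_like(logits, dtype=float)
        probs[np.argmax(logits)] = 1.0
        return probs
    scaled = logits / temperature              # low T sharpens the distribution, high T flattens it
    scaled -= scaled.max()                     # subtract the max for numerical stability
    exp = np.exp(scaled)
    return exp / exp.sum()

logits = np.array([2.0, 1.0, 0.5, 0.1])
print(apply_temperature(logits, 0.2))   # almost all probability on the most likely token
print(apply_temperature(logits, 1.0))   # the standard softmax distribution
print(apply_temperature(logits, 10.0))  # close to uniform: tokens become nearly equally likely
```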
Top-K and top-P
Top-K and top-P (also known as nucleus sampling)4 are two sampling settings used in LLMs to restrict the predicted next token to come from tokens with the top predicted probabilities. Like temperature, these sampling settings control the randomness and diversity of generated text.
Top-K sampling selects the top K most likely tokens from the model's predicted distribution. The higher top-K, the more creative and varied the model's output; the lower top-K, the more restrictive and factual the model's output. A top-K of 1 is equivalent to greedy decoding.
Top-P sampling selects the top tokens whose cumulative probability does not exceed a certain value (P). Values for P range from 0 (greedy decoding) to 1 (all tokens in the LLM's vocabulary).
The best way to choose between top-K and top-P is to experiment with both methods (or both together) and see which one produces the results you are looking for.
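To illustrate the mechanics, here is a sketch of both filters applied to a toy distribution. Real implementations differ in details such as tie handling and where the cumulative cutoff falls, so treat this as conceptual only:
```
import numpy as np

def top_k_filter(probs, k):
    """Keep only the k most probable tokens, then renormalize."""
    keep = np.argsort(probs)[-k:]
    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()

def top_p_filter(probs, p):
    """Keep the smallest set of top tokens whose cumulative probability reaches p."""
    order = np.argsort(probs)[::-1]              # most probable tokens first
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, p) + 1  # always keep at least one token
    keep = order[:cutoff]
    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()

probs = np.array([0.5, 0.25, 0.15, 0.07, 0.03])
print(top_k_filter(probs, 2))    # only the two most likely tokens remain candidates
print(top_p_filter(probs, 0.9))  # tokens are kept until 90% cumulative probability is covered
```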
Putting it all together
Choosing between top-K, top-P, temperature, and the number of tokens to generate depends on the specific application and desired outcome, and the settings all impact one another. It's also important to make sure you understand how your chosen model combines the different sampling settings together.
If temperature, top-K, and top-P are all available (as in Vertex Studio), tokens that meet both the top-K and top-P criteria are candidates for the next predicted token, and then temperature is applied to sample from the tokens that passed the top-K and top-P criteria. If only top-K or top-P is available, the behavior is the same but only the one top-K or P setting is used.
If temperature is not available, whatever tokens meet the top-K and/or top-P criteria are then randomly selected from to produce a single next predicted token.
At extreme settings of one sampling configuration value, that one sampling setting either cancels out other configuration settings or becomes irrelevant.
If you set temperature to 0, top-K and top-P become irrelevant: the most probable token becomes the next token predicted. If you set temperature extremely high (above 1, generally into the 10s), temperature becomes irrelevant and whatever tokens make it through the top-K and/or top-P criteria are then randomly sampled to choose a next predicted token.
If you set top-K to 1, temperature and top-P become irrelevant. Only one token passes the top-K criteria, and that token is the next predicted token. If you set top-K extremely high, like to the size of the LLM's vocabulary, any token with a nonzero probability of being the next token will meet the top-K criteria and none are selected out.
If you set top-P to 0 (or a very small value), most LLM sampling implementations will then only consider the most probable token to meet the top-P criteria, making temperature and top-K irrelevant. If you set top-P to 1, any token with a nonzero probability of being the next token will meet the top-P criteria, and none are selected out.
As a general starting point, a temperature of .2, top-P of .95, and top-K of 30 will give you relatively coherent results that can be creative but not excessively so. If you want especially creative results, try starting with a temperature of .9, top-P of .99, and top-K of 40. And if you want less creative results, try starting with a temperature of .1, top-P of .9, and top-K of 20.
Finally, if your task always has a single correct answer (e.g., answering a math problem), start with a temperature of 0.
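To make these starting points easy to reuse, they can be written down as configuration objects. A small sketch, again assuming the Vertex AI Python SDK's GenerationConfig from the earlier example:
```
from vertexai.generative_models import GenerationConfig

balanced = GenerationConfig(temperature=0.2, top_p=0.95, top_k=30)      # coherent but still somewhat creative
creative = GenerationConfig(temperature=0.9, top_p=0.99, top_k=40)      # especially creative results
conservative = GenerationConfig(temperature=0.1, top_p=0.9, top_k=20)   # less creative results
single_answer = GenerationConfig(temperature=0)                          # tasks with one correct answer, e.g. math
```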
NOTE: With more freedom (higher temperature, top-K, top-P, and output tokens), the LLM might generate text that is less relevant.
WARNING: Have you ever seen a response ending with a large amount of filler words? This is also known as the "repetition loop bug", which is a common issue in Large Language Models where the model gets stuck in a cycle, repeatedly generating the same (filler) word, phrase, or sentence structure, often exacerbated by inappropriate temperature and top-K/top-P settings. This can occur at both low and high temperature settings, though for different reasons. At low temperatures, the model becomes overly deterministic, sticking rigidly to the highest probability path, which can lead to a loop if that path revisits previously generated text. Conversely, at high temperatures, the model's output becomes excessively random, increasing the probability that a randomly chosen word or phrase will, by chance, lead back to a prior state, creating a loop due to the vast number of available options. In both cases, the model's sampling process gets "stuck," resulting in monotonous and unhelpful output until the output window is filled. Solving this often requires careful tinkering with temperature and top-K/top-P values to find the optimal balance between determinism and randomness.
Prompting techniques
LLMs are tuned to follow instructions and are trained on large amounts of data so they can understand a prompt and generate an answer. But LLMs aren't perfect; the clearer your prompt text, the better it is for the LLM to predict the next likely text. Additionally, specific techniques that take advantage of how LLMs are trained and how LLMs work will help you get the relevant results from LLMs.
Now that we understand what prompt engineering is and what it takes, let's dive into some examples of the most important prompting techniques.
General prompting / zero shot
A zero-shot5 prompt is the simplest type of prompt. It only provides a description of a task and some text for the LLM to get started with. This input could be anything: a question, a start of a story, or instructions. The name zero-shot stands for 'no examples'.
Let's use Vertex AI Studio (for Language) in Vertex AI,6 which provides a playground to test prompts. In Table 1, you will see an example zero-shot prompt to classify movie reviews.
The table format as used below is a great way of documenting prompts. Your prompts will likely go through many iterations before they end up in a codebase, so it's important to keep track of your prompt engineering work in a disciplined, structured way. More on this table format, the importance of tracking prompt engineering work, and the prompt development process is in the Best Practices section later in this chapter ("Document the various prompt attempts").
The model temperature should be set to a low number, since no creativity is needed, and we use the gemini-pro default top-K and top-P values, which effectively disable both settings (see 'LLM Output Configuration' above). Pay attention to the generated output. The words disturbing and masterpiece should make the prediction a little more complicated, as both words are used in the same sentence.
Name: 1_1_movie_classification
Goal: Classify movie reviews as positive, neutral or negative.
Model: gemini-pro
Temperature: 0.1
Token Limit: 5
Top-K: N/A
Top-P: 1

Prompt:
Classify movie reviews as POSITIVE, NEUTRAL or NEGATIVE.
Review: "Her" is a disturbing study revealing the direction humanity is headed if AI is allowed to keep evolving, unchecked. I wish there were more movies like this masterpiece.
Sentiment:

Output: POSITIVE

Table 1. An example of zero-shot prompting
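The same zero-shot prompt can also be run against the API instead of Vertex AI Studio. A minimal sketch, assuming the Vertex AI Python SDK and an already initialized project, mirroring the Table 1 settings:
```
from vertexai.generative_models import GenerationConfig, GenerativeModel

model = GenerativeModel("gemini-pro")

prompt = """Classify movie reviews as POSITIVE, NEUTRAL or NEGATIVE.
Review: "Her" is a disturbing study revealing the direction humanity is headed
if AI is allowed to keep evolving, unchecked. I wish there were more movies like
this masterpiece.
Sentiment:"""

response = model.generate_content(
    prompt,
    # Low temperature and a small token limit, mirroring Table 1.
    generation_config=GenerationConfig(temperature=0.1, max_output_tokens=5),
)
print(response.text)  # expected: POSITIVE
```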
When zero-shot doesn't work, you can provide demonstrations or examples in the prompt, which leads to "one-shot" and "few-shot" prompting.
One-shot & few-shot
When creating prompts for AI models, it is helpful to provide examples. These examples can help the model understand what you are asking for. Examples are especially useful when you want to steer the model to a certain output structure or pattern.
A one-shot prompt provides a single example, hence the name one-shot. The idea is the model has an example it can imitate to best complete the task.
A few-shot prompt7 provides multiple examples to the model. This approach shows the model a pattern that it needs to follow. The idea is similar to one-shot, but multiple examples of the desired pattern increase the chance the model follows the pattern.
The number of examples you need for few-shot prompting depends on a few factors, including the complexity of the task, the quality of the examples, and the capabilities of the generative AI (gen AI) model you are using. As a general rule of thumb, you should use at least three to five examples for few-shot prompting. However, you may need to use more examples for more complex tasks, or you may need to use fewer due to the input length limitation of your model.
Table 2 shows a few-shot prompt example. Let's use the same gemini-pro model configuration settings as before, other than increasing the token limit to accommodate the need for a longer response.
Goal: Parse pizza orders to JSON
Model: gemini-pro
Temperature: 0.1
Token Limit: 250
Top-K: N/A
Top-P: 1

Prompt:
Parse a customer's pizza order into valid JSON:

EXAMPLE:
I want a small pizza with cheese, tomato sauce, and pepperoni.
JSON Response:
```
{
  "size": "small",
  "type": "normal",
  "ingredients": [["cheese", "tomato sauce", "pepperoni"]]
}
```

EXAMPLE:
Can I get a large pizza with tomato sauce, basil and mozzarella.
JSON Response:
```
{
  "size": "large",
  "type": "normal",
  "ingredients": [["tomato sauce", "basil", "mozzarella"]]
}
```

Now, I would like a large pizza, with the first half cheese and mozzarella. And the other tomato sauce, ham and pineapple.
JSON Response:

Output:
```
{
  "size": "large",
  "type": "half-half",
  "ingredients": [["cheese", "mozzarella"], ["tomato sauce", "ham", "pineapple"]]
}
```

Table 2. An example of few-shot prompting
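In code, a few-shot prompt is just the examples concatenated ahead of the new request. The sketch below follows Table 2, again assuming the Vertex AI Python SDK; the backtick-stripping step is a pragmatic assumption, since models sometimes wrap JSON output in markdown fences:
```
import json
from vertexai.generative_models import GenerationConfig, GenerativeModel

FEW_SHOT_PROMPT = """Parse a customer's pizza order into valid JSON:

EXAMPLE:
I want a small pizza with cheese, tomato sauce, and pepperoni.
JSON Response:
{"size": "small", "type": "normal", "ingredients": [["cheese", "tomato sauce", "pepperoni"]]}

EXAMPLE:
Can I get a large pizza with tomato sauce, basil and mozzarella.
JSON Response:
{"size": "large", "type": "normal", "ingredients": [["tomato sauce", "basil", "mozzarella"]]}

Now, I would like a large pizza, with the first half cheese and mozzarella.
And the other tomato sauce, ham and pineapple.
JSON Response:"""

model = GenerativeModel("gemini-pro")
response = model.generate_content(
    FEW_SHOT_PROMPT,
    generation_config=GenerationConfig(temperature=0.1, max_output_tokens=250),
)

raw = response.text.strip()
if raw.startswith("`"):                       # strip markdown fences if the model added them
    raw = raw.strip("`").removeprefix("json")
order = json.loads(raw)                       # the examples steer the model toward parseable JSON
print(order["size"], order["type"], order["ingredients"])
```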
When you choose examples for your prompt, use examples that are relevant to the task you want to perform. The examples should be diverse, of high quality, and well written. One small mistake can confuse the model and will result in undesired output.
If you are trying to generate output that is robust to a variety of inputs, then it is important to include edge cases in your examples. Edge cases are inputs that are unusual or unexpected, but that the model should still be able to handle.
System, contextual and role prompting
System, contextual and role prompting are all techniques used to guide how LLMs generate text, but they focus on different aspects:
- System prompting sets the overall context and purpose for the language model. It defines the 'big picture' of what the model should be doing, like translating a language, classifying a review etc.
- Contextual prompting provides specific details or background information relevant to the current conversation or task. It helps the model to understand the nuances of what's being asked and tailor the response accordingly.
- Role prompting assigns a specific character or identity for the language model to adopt. This helps the model generate responses that are consistent with the assigned role and its associated knowledge and behavior.
There can be considerable overlap between system, contextual, and role prompting. E.g. a prompt that assigns a role to the system can also have a context.
However, each type of prompt serves a slightly different primary purpose:
- System prompt: Defines the model's fundamental capabilities and overarching purpose.
- Contextual prompt: Provides immediate, task-specific information to guide the response. It's highly specific to the current task or input, which is dynamic.
- Role prompt: Frames the model's output style and voice. It adds a layer of specificity and personality.
Distinguishing between system, contextual, and role prompts provides a framework for designing prompts with clear intent, allowing for flexible combinations and making it easier to analyze how each prompt type influences the language model's output.
Let's dive into these three different kinds of prompts.
System prompting
Table 3 contains a system prompt, where I specify additional information on how to return the output. I increased the temperature to get a higher creativity level, and I specified a higher token limit. However, because of my clear instruction on how to return the output, the model didn't return extra text.
Goal: Classify movie reviews as positive, neutral or negative.
Model: gemini-pro
Temperature: 1
Token Limit: 5
Top-K: 40
Top-P: 0.8

Prompt:
Classify movie reviews as positive, neutral or negative. Only return the label in uppercase.
Review: "Her" is a disturbing study revealing the direction humanity is headed if AI is allowed to keep evolving, unchecked. It's so disturbing I couldn't watch it.
Sentiment:

Output: NEGATIVE

Table 3. An example of system prompting
System prompts can be useful for generating output that meets specific requirements. The name 'system prompt' actually stands for 'providing an additional task to the system'. For example, you could use a system prompt to generate a code snippet that is compatible with a specific programming language, or you could use a system prompt to return a certain structure. Have a look into Table 4, where I return the output in JSON format.
Goal: Classify movie reviews as positive, neutral or negative, return JSON.
Model: gemini-pro
Temperature: 1
Token Limit: 1024
Top-K: 40
Top-P: 0.8

Prompt:
Classify movie reviews as positive, neutral or negative. Return valid JSON:
Review: "Her" is a disturbing study revealing the direction humanity is headed if AI is allowed to keep evolving, unchecked. It's so disturbing I couldn't watch it.
Schema:
```
MOVIE:
{
  "sentiment": String "POSITIVE" | "NEGATIVE" | "NEUTRAL",
  "name": String
}
MOVIE REVIEWS:
{
  "movie_reviews": [MOVIE]
}
```
JSON Response:

Output:
```
{
  "movie_reviews": [
    {
      "sentiment": "NEGATIVE",
      "name": "Her"
    }
  ]
}
```

Table 4. An example of system prompting with JSON format
There are some benefits in returning JSON objects from a prompt that extracts data. In a real-world application I don't need to manually create this JSON format, I can already return the data in a sorted order (very handy when working with datetime objects), but most importantly, by prompting for a JSON format it forces the model to create a structure and limits hallucinations.
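Because the schema is known up front, the returned JSON can be checked mechanically before it is used downstream. The helper below is an illustrative sketch of such a check (not something from the whitepaper's own tooling):
```
import json

ALLOWED_SENTIMENTS = {"POSITIVE", "NEGATIVE", "NEUTRAL"}

def parse_movie_reviews(model_output):
    """Parse and sanity-check the structure requested by the schema in Table 4."""
    data = json.loads(model_output)              # fails loudly if the output is not valid JSON
    reviews = data["movie_reviews"]              # fails loudly if the top-level key is missing
    for review in reviews:
        if review["sentiment"] not in ALLOWED_SENTIMENTS:
            raise ValueError(f"Unexpected sentiment: {review['sentiment']}")
    return reviews

output = '{"movie_reviews": [{"sentiment": "NEGATIVE", "name": "Her"}]}'
print(parse_movie_reviews(output))
```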
System prompts can also be really useful for safety and toxicity. To control the output, simply add an additional line to your prompt like: 'You should be respectful in your answer.'
Role prompting
Role prompting is a technique in prompt engineering that involves assigning a specific role to the gen AI model. This can help the model to generate more relevant and informative output, as the model can craft its responses to the specific role that it has been assigned.
For example, you could role prompt a gen AI model to be a book editor, a kindergarten teacher, or a motivational speaker. Once the model has been assigned a role, you can then give it prompts that are specific to that role. For example, you could prompt a teacher to create a lesson plan that they can then review. Have a look into Table 5, where the model takes the role of a travel guide.
Goal: Act as travel guide and provide 3 travel suggestions
Model: gemini-pro
Temperature: 1
Token Limit: 1024
Top-K: 40
Top-P: 0.8

Prompt:
I want you to act as a travel guide. I will write to you about my location and you will suggest 3 places to visit near me. In some cases, I will also give you the type of places I will visit.
My suggestion: "I am in Amsterdam and I want to visit only museums."
Travel Suggestions:

Output:
1. Rijksmuseum: World-renowned museum, known for Dutch masterpieces like Rembrandt's "The Night Watch." Don't miss the incredible collection of paintings and artifacts showcasing Dutch art and history.
2. Van Gogh Museum: Immerse yourself in the captivating works of Vincent van Gogh. The museum houses the largest collection of his paintings and drawings, including "The Starry Night" and "Sunflowers."
3. Stedelijk Museum Amsterdam: Discover modern and contemporary art from around the world. The museum's collection includes works by Picasso, Kandinsky, and Rietveld, housed in a striking modern building.

Table 5. An example of role prompting
The above example shows an example of taking the role of a travel agent. If you change the role to a teacher of geography, you will notice that you receive a different response.
Defining a role perspective for an AI model gives it a blueprint of the tone, style, and focused expertise you're looking for to improve the quality, relevance, and effectiveness of your output.
Here are some styles you can choose from which I find effective:
Confrontational, Descriptive, Direct, Formal, Humorous, Influential, Informal, Inspirational, Persuasive
Let's change our prompt in Table 6 to include a humorous and inspirational style.
Prompt:
I want you to act as a travel guide. I will write to you about my location and you will suggest 3 places to visit near me in a humorous style.
My suggestion: "I am in Manhattan."
Travel Suggestions:

Output:
1. Behold the Empire State of Mind: Ascend to the dizzying heights of the Empire State Building and bask in the glory of Manhattan's skyline. Prepare to feel like King Kong atop the Big Apple, minus the giant ape-sized banana.
2. Get Artsy-Fartsy at MoMA: Unleash your inner art aficionado at the Museum of Modern Art (MoMA). Gaze upon masterpieces that will boggle your mind and make you question whether your stick-figure drawings have any artistic merit.
3. Shop 'Til You Drop on Fifth Avenue: Indulge in some retail therapy on the iconic Fifth Avenue. Brace yourself for sticker shock as you window-shop (or actually shop) at designer boutiques that will make your wallet cry. But hey, you're in Manhattan, so you might as well embrace the fabulousness!

Table 6. An example of role prompting with a humorous tone and style
Contextual prompting
By providing contextual prompts, you can help ensure that your AI interactions are as seamless and efficient as possible. The model will be able to more quickly understand your request and be able to generate more accurate and relevant responses, as you can see in the example of Table 7.
Goal: Suggest articles for a blog about retro games
Model: gemini-pro
Temperature: 1
Token Limit: 1024
Top-K: 40
Top-P: 0.8

Prompt:
Context: You are writing for a blog about retro 80's arcade video games.
Suggest 3 topics to write an article about with a few lines of description of what this article should contain.

Output:
1. **The Evolution of Arcade Cabinet Design:** This article would explore the evolution of arcade cabinet designs, from the early wood and metal cabinets of the 1970s to the sleek, neon-lit designs of the 1980s and beyond.
2. **Blast From The Past: Iconic Arcade Games of The 80's:** This article would feature a glimpse into the history, impact, and legacy of some of the most iconic and influential arcade games of the 1980s, highlighting their innovations, popular mechanics, and enduring charm.
3. **The Rise and Retro Revival of Pixel Art:** This article would delve into the evolution of pixel art as a defining visual style of early arcade games, tracing its roots, exploring its techniques, and discussing the resurgence of pixel art in modern games and digital art.

Table 7. An example of contextual prompting